Categories
Angular Backend Developer Development Express Frontend Developers Node Programming

MEAN Stack Development Influences The Future Of Web Apps

Web app development is a fast-moving field, and its adoption keeps growing. Today's web applications demand competent architecture and navigation: they need to be dynamic, user-friendly, robust, and flexible. Rapid developments in technology leave web developers with many choices, so one essential factor in choosing a suitable framework is finding a software stack whose best features work well together.

MEAN stack is a growing contemporary trend in JavaScript development. This stack meets all the requirements for efficient development in the best possible way. An open-source JavaScript bundle for web apps, MEAN is an acronym that stands for:

  • M stands for MongoDB,
  • E stands for Express,
  • A stands for AngularJS, and
  • N stands for NodeJS.
MEAN Stack Development

Web developers find MEAN stack application development an attractive choice and are switching to it because it is built on the current go-to technology: full-stack JavaScript. The seamless combination of these four robust technologies makes it one of the most sought-after bundles for web app development services.

What makes this stack an ideal choice for developing a website is as follows:

  • Flexible in developing for any size and type of organization.
  • A viable technology solution for all business segments, from startups and SMEs to large enterprises.
  • Straightforward for frontend and backend developers to apply this framework.
  • Suitable framework for any multi-layer application.
  • Immense benefit in terms of productivity and performance.

Knowledge of JavaScript language mechanisms from the presentation layer to the database is all you need to proceed with the MEAN stack software.

A brief look into the 4 Components of MEAN

MongoDB is the open-source, NoSQL database framework for JavaScript

  • Cross-platform document-oriented database model
  • A schema-less, independent NoSQL database
  • Keeps the complete application in a single language, JavaScript
  • Stores and serves the application's data in MEAN
  • High scalability in both storage and performance
  • Cost-effective and useful in transferring both client and server-side data without compromising data access
  • Expandable resources, load balancing, and handling of increased activity periods (a minimal usage sketch follows this list)
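
To make this concrete, here is a minimal sketch of storing and reading a schema-less document with the official mongodb Node.js driver; the database name, collection, and document are illustrative, and a local MongoDB instance plus an installed mongodb package are assumed.

// Minimal sketch: schema-less document storage with the official `mongodb` driver.
import { MongoClient } from 'mongodb';

async function main(): Promise<void> {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();

  // Collections hold JSON-like documents; no schema has to be declared up front.
  const users = client.db('meanApp').collection('users');
  await users.insertOne({ name: 'Ada', roles: ['admin'], createdAt: new Date() });

  const found = await users.findOne({ name: 'Ada' });
  console.log(found);

  await client.close();
}

main().catch(console.error);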

ExpressJS is the lightweight server-side JavaScript framework

  • Web application framework for NodeJS and simplifies the development process
  • Cuts down the work of writing secure code for web and mobile applications
  • Developers can add new features and enhancements
  • Minimal structure mainly used for backend development and aids decluttering
  • Building smooth server-side software with NodeJS
  • Prevents accidental redefinition of variables, eliminating errors and saving time

AngularJS is the web frontend JavaScript framework

  • A browser-independent MVC JavaScript UI framework with data binding
  • Popular Google’s front end framework that enables smooth flow of information throughout the application
  • Enables rapid development of dynamic single-page web apps (SPAs)
  • Modular structure and develops for both mobile and web
  • Easy-to-use templates and high scalability for full stack front end augmentation

NodeJS is the open-source JavaScript-based runtime framework

  • Built on Chrome's V8 JavaScript engine
  • Compiles JavaScript source code to native machine code before execution
  • Helps build scalable, secure web applications and comes with an integrated web server
  • Maintains a vast ecosystem of open source libraries and components
  • Quickly responds to usage spikes during runtime

The reasons this stack is preferred for web application development are as follows:

  • Inexpensive in Nature

Its budget-friendly nature is the main reason this stack stands a cut above other technology frameworks. And because it is a full-stack development approach, unnecessary expenditure on resources is eliminated for customers as well as developers: a large volume of code is reused and shared among developers, which keeps the budget considerably restrained.

  • Full JavaScript Framework

Since the framework is entirely JavaScript, it brings its own benefits in terms of exceptional user experience and data handling. Both Linux and Windows are supported. Data retrieval is speedy thanks to the power and dependability of the framework, and NodeJS and AngularJS together make it easier to build competent web apps that can handle heavier traffic.

  • Universal Accessibility to JSON

Because JSON is used everywhere, whether in AngularJS, MongoDB, NodeJS, or ExpressJS, data moves seamlessly between layers. The highlight is that rewriting code is not required: data flowing between the layers does not need to be reformatted, since MEAN uses a standard JSON format for data throughout. Working with external APIs also becomes much simpler.
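As a rough illustration (the route, database, and field names are made up), the same JSON document that MongoDB stores can be handed straight to the Angular client by an Express route, with no reformatting in between:

// Sketch only: a MongoDB document travels through Express to the browser as-is.
// Assumes the express and mongodb packages are installed; all names are illustrative.
import express from 'express';
import { MongoClient } from 'mongodb';

const app = express();
const client = new MongoClient('mongodb://localhost:27017');

app.get('/api/users', async (_req, res) => {
  const users = await client.db('meanApp').collection('users').find().toArray();
  res.json(users); // the Angular front end consumes exactly this JSON shape
});

client.connect().then(() => app.listen(3000));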

  • Highly Scalable and so very Popular

Full-stack development with MEAN is scalable, and its ability to handle many concurrent users makes it a reliable choice and a business favorite. In addition, all four components are open source. Development time is also shorter, thanks to the many available frameworks, libraries, and reusable modules. Because it is swift in operation, easy to collaborate on, easy to learn, and quick for building cloud-native applications, it has become a developer favorite.

Being open source makes it available for free. MEAN can be deployed easily because it includes its own web server. The development potential is high, since many other JavaScript resources work with this stack. Because of this, avid developers look forward to working with MEAN stack web development, and the built-in JavaScript elements make it even easier to utilize resources in this sector.

  • Reusability and Fixing Are Much Simpler

Using a single language across the whole application streamlines the development process. This makes things easier for developers, because there is no need for separate specialists for each part of a web application. It also makes it easy to track the entire development flow, monitor data exchange, and catch errors and bugs. The stack can be improved further with a few third-party open-source tools and libraries that let the frontend and the backend be reworked quickly.

  • Lowered Development Expenses

A MEAN application enters the tech world ready to take advantage of all the cost savings and performance improvements of the cloud. A primary feature of this technology is that it does not incur needless expenses, so a large volume of concurrent users can be served. Code reuse across the entire application reduces reinvention, and code sharing helps developers reach their target objective within the committed time frame and allocated budget.

  • Enables Collaboration Between Developers

The stack has a lot of community support behind it, so finding answers to questions, or even hiring assistance, is straightforward. All developers share the same programming fundamentals, so it is effortless and efficient for them to understand the nuances of web app development together. Hiring MEAN stack developers has the added advantage that they can understand the whole stack, facilitate collaboration, and manage the project with ease.

  • Access to Congruent Coding

MEAN stack makes it easy to move code between its layers, i.e., the server side and the client side. Code written for one layer can be transferred to another without difficulty or any disruption in performance, which is another critical advantage of this technology over the rest.
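For example, a small validation helper (hypothetical, purely for illustration) can be written once and imported by both the Express back end and the Angular front end:

// shared/validate-email.ts -- a hypothetical module reused on both sides of the stack.
export function isValidEmail(email: string): boolean {
  // A deliberately simple check, good enough for illustration.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

// Server side (Express route handler):
//   if (!isValidEmail(req.body.email)) res.status(400).json({ error: 'Invalid email' });
//
// Client side (Angular component):
//   this.emailError = !isValidEmail(this.form.value.email);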

  • Systematic & Exceptionally Flexible

It is incredibly swift to prototype with, because the stack ships with its own web server, so it can be brought up without difficulty, and the database can be scaled on demand to absorb momentary usage spikes. A consistent language and this flexibility give developers an added competitive edge.

Some famous and familiar websites that use the MEAN stack are Netflix, Uber, LinkedIn, Walmart, PayPal, and Yahoo. Web development frameworks and databases keep improving every day, and this remains a highly suitable stack technology for developing cutting-edge web and mobile applications.

Categories
Backend Developer Development Express Javascript Node

What Really Makes Node.js Great?

JavaScript is what really plays the decisive role in making Node.js great. JavaScript is a favorite of most software developers for building applications, and Node.js uses JavaScript as its main language. The Node.js community uses it to create scalable applications.

After specialising in Ruby on Rails for a while, I realised that the time has come for me to expand my skill set. I wanted to move on from working mostly on small-scale apps into more ambitious projects for large corporate clients. Node.js was the obvious choice, because I already knew JavaScript and was aware that the market value of Node.js skills would only grow.

-Michal Hartwich


Why has Node.js become a standard choice for enterprises building large-scale apps?

REASONS:

  1. It makes real-time applications fast.
  2. With Node.js, coding in JavaScript is possible on both the client and the server side.
  3. It increases efficiency in the development process.
  4. Code execution is fast, which boosts productivity.
  5. It is also a good fit for microservices.

You have read about Node.js and the uniqueness that makes it stand out. Let's start with the intro part.

Why is it great?

Node.js is a cross-platform, open-source JavaScript runtime environment that executes JavaScript code outside the browser. Although .js is the filename extension for JavaScript files, Node.js itself is the runtime that allows developers to use JavaScript for server-side scripting and produce dynamic web page content.

Corporate users of Node.js include LinkedIn, Netflix, Microsoft, and many others. Node.js has an event-driven architecture capable of asynchronous I/O (a form of input/output processing that permits other processing to continue before a transmission has finished). It aims to optimize scalability in web applications.

Helping Hands- Node.js Frameworks:


To smooth your application development, here are some excellent Node.js frameworks for building applications more effectively.

1. Design your web application with Express.js


Express is a web application framework for Node.js developed by T J Holowaychuk. It is free and open-source software under the MIT License. It is designed to build web applications and APIs.

Express.js is often described as a minimalist framework. It is a fast, single-threaded, open-source Node.js framework used for web and mobile app development. The framework is easy to understand, allows easy integration of third-party services, and offers a seamless I/O approach.

Several NPM modules can be integrated into Express.js for better operations. In a MEAN stack, the back end is built with Express.js and MongoDB, with AngularJS on the front end. Express.js is characterized by an event loop and non-blocking I/O.
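To give a feel for how little scaffolding Express imposes, here is a minimal sketch (routes and port are illustrative; it assumes the express package and, for TypeScript, @types/express):

// Minimal Express app: a couple of routes plus the built-in JSON body parser.
import express from 'express';

const app = express();
app.use(express.json()); // middleware that parses JSON request bodies

app.get('/health', (_req, res) => {
  res.json({ status: 'ok' });
});

app.post('/echo', (req, res) => {
  res.json(req.body); // echo the parsed JSON body back to the caller
});

app.listen(3000, () => console.log('Express listening on port 3000'));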

2. Developer supportive- Nest.js


Nest.js is a server-side (backend) application framework built to support developer productivity. The Nest framework is written in TypeScript and is characterized by dependency injection, authentication, ORM support, and REST APIs.



Development with TypeScript in Nest.js lets you combine different programming styles, such as functional programming, object-oriented programming, and functional reactive programming. However, the tooling around Nest.js still needs improvement for better performance.

It is event-based, which lets microservices scale. Nest.js gives developers the flexibility to use any Node.js library; it is versatile as well as progressive.
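A minimal sketch of what a Nest.js controller and module look like (the route and data are made up; a standard Nest scaffold with @nestjs/common and @nestjs/core is assumed):

// One controller, one module, one HTTP route.
import { Controller, Get, Module } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';

@Controller('books')
class BooksController {
  @Get() // handles GET /books
  findAll(): string[] {
    return ['Dune', 'Neuromancer']; // placeholder data for illustration
  }
}

@Module({ controllers: [BooksController] })
class AppModule {}

async function bootstrap(): Promise<void> {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}

bootstrap();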

3. Iconic Meteor.js


This Node.js framework uses isomorphic JavaScript code for mobile and web app development. Meteor.js is MVC-based, open source, and provides a highly effective front-end solution. It offers a highly efficient client-server interface, seamless debugging, live reload, database integration, and real-time testing.


Though it has a few shortcomings, such as the lack of server-side rendering, the absence of built-in PWA support, and the lack of support for databases other than MongoDB, Meteor.js still stands out as an excellent framework thanks to its simplicity and its versatile libraries and packages.

OMG! The developer experience is amazing.


Node.js is an open-source, cross-platform runtime environment widely used for real-time applications. It is a top choice for developers in web application development because it keeps gaining new features.

It also allows developers to make web servers and networking tools with effective use of JavaScript.

  • Node.js Is Very Fast:

Node.js uses Google's V8 engine, which compiles JavaScript into native machine code (JavaScript → V8 (C++) → machine code) and runs very fast, and Node.js passes that speed on to any framework built on top of it. It helps enterprises build fast, flexible network applications that can handle parallel connections with high throughput.

  • Stimulate Sharing:

Node.js encourages sharing through the Node Package Manager, or NPM. With the built-in NPM, developers can install, share, and reuse packages with a few commands. In this way the Node.js package manager has proved to be a strong and stable solution for developers.

  • Node.js Is Made For Real-Time Web Applications:

It has remarkable capabilities when it comes to creating real-time applications such as chat and gaming apps. It is also good for programs that need an event-based, non-blocking server.

  • Single Code Base Web Application:

The developer writes JavaScript for both the server and the client, making it easy to send data between them and keep it synchronized.

  • Data Streaming:

Many web frameworks treat HTTP requests and responses as whole data objects, but in Node.js they can be handled as streams. Node.js is very good at handling I/O, and developers can use this to their advantage to build some amazing web applications: Node.js can read and write streams directly to and from HTTP connections.
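A small sketch using only Node.js core modules (the file path is hypothetical): instead of buffering a large file into memory, it is piped to the HTTP response chunk by chunk.

// Streaming a large file over HTTP with Node.js core modules only.
import { createServer } from 'http';
import { createReadStream } from 'fs';

createServer((_req, res) => {
  res.writeHead(200, { 'Content-Type': 'video/mp4' });
  // pipe() moves data chunk by chunk; the whole file never sits in memory.
  createReadStream('./big-video.mp4').pipe(res);
}).listen(3000);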

  • Every Developer Knows JavaScript:

Every web coder has written at least a little JavaScript, even if that JavaScript was hacking on a jQuery plugin. Nowadays it is hard to find a web developer who has never touched JavaScript, and Node.js is easy to learn for a developer who is already familiar with it.

  • Increase Productivity:

Productivity should not be seen as a one-dimensional quality but a multidimensional one. Some developers are more productive in Java because of its compile-time error checking; with Node.js, enterprises can merge the frontend and backend teams into a single unit so the application is delivered in less time with higher productivity.

Even if we had a language that made writing callbacks easy (like C#), we would still need a platform with minimal to no blocking I/O operations.

Node.js: What's So Special?

Node.js server technology is used to create and run a wide variety of web applications, much as Ruby on Rails does. Its main language is JavaScript, which is lightweight, and it manages plenty of plugins via the Node Package Manager (NPM), which allows developers to custom-build applications.

There are no separate languages for back-end and front-end developers; Node.js uses the same language for both, which makes it easy to create apps.

Back-end applications interact directly with the database via an application programming interface (API) that pulls, saves, or changes data, whereas front-end code interacts with the back-end server to pull that data into the front-end user interface (UI).

But with Node.js, the front end and back end merge and work together instead of pulling in different directions.

Because every developer is familiar with JavaScript, Node.js makes it easy to build and ship applications: no second language is needed to create API and UI code in Node.js, which reduces complexity.

Node.js boosts both the front end and the back end and acts as one of the best technologies for developing apps. This means your team will be more efficient and cross-functional, leading to lower development costs. It is also worth mentioning that a single JavaScript codebase is much easier to work with, and you can reuse and share code between the backend and the frontend of your application, which speeds up the development process.

The complete JS technology stack is open source and free, which is good news. Finally, Node offers a complete package for almost every possible thing developers are looking for.

TOP 5 SEARCH ENGINE QUERIES ON NODE.JS


Why Node js is Used?

Node.js is an excellent platform built on Chrome's JavaScript runtime for building fast, scalable network applications.

Is Node.js a programming language?

Node.js combines the efficiency of a scripting language with the power of Unix network programming. It is proficient with internet fundamentals such as HTTP. Node.js is therefore not a programming language but a runtime environment for JavaScript.

Is Node.js same as JavaScript?

Simply put, JavaScript is a programming language that runs in any browser with a JavaScript engine, while Node.js is a runtime that executes JavaScript outside the browser and ships with its own similar set of libraries. So they are not the same, but Node.js builds on the same language and comparable libraries for building applications.

What is Great about Node js?

The great thing about Node.js is that it uses an event loop instead of threads. Accessing web services, reading files, and waiting for an upload to finish are slow operations compared with in-memory work. In Node.js, every input/output operation is asynchronous, so the server keeps processing while I/O operations complete. This makes scaling and handling events in a program easy.
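Here is a small sketch with core modules only (the file and routes are illustrative): while the slow file read is in flight, the event loop keeps serving other requests.

// Non-blocking I/O: a slow disk read does not stall other requests.
import { createServer } from 'http';
import { readFile } from 'fs';

createServer((req, res) => {
  if (req.url === '/report') {
    // Asynchronous read: the callback runs later, once the data is ready.
    readFile('./large-report.json', 'utf8', (err, data) => {
      if (err) {
        res.writeHead(500);
        res.end('read failed');
        return;
      }
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(data);
    });
  } else {
    res.end('still responsive'); // answered immediately, not blocked by the read
  }
}).listen(3000);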

What are the benefits of using node.js for a startup?

The benefits of Node.js:
1. Node.js is scalable.
2. It boosts your development speed.
3. It has a low learning curve if you are familiar with JavaScript.
4. Programs take less time to run.

Categories
Business Database Development Express Resume Startup

Why Do You Need To Hack Your Company?

The increasing number and expanding knowledge of hackers, combined with the growing number of system vulnerabilities, make it quite possible that one day your computer systems will be hacked or compromised in some way. Protecting your systems from hackers and common vulnerabilities is critical.

“To catch a thief you must think like a thief.” This is exactly the approach companies are following these days to protect their systems and networks from being hacked. Once you understand the tricks hackers use, you can find out how vulnerable your systems really are. Hackers prey on weak security and outdated system infrastructure.



Security is one of the biggest concerns of every company, and when it comes to cybersecurity it becomes even more important. Business owners view cyberattacks as a serious threat, and self-hacking seems to them a fitting countermeasure. The idea may look extreme and daunting, but it is a necessity.

Hacking your own company or systems means hiring an ethical hacker. An ethical hacker, also referred to as a white-hat hacker, assesses the security of computer systems by looking for weaknesses and vulnerabilities in target systems, using the same knowledge and tools as a malicious hacker but in a lawful and legitimate manner.

This is an information security expert who systematically attempts to penetrate or hack a computer system, network, application, or other computing resources on behalf of its owners, and with their permission, to find security vulnerabilities that a malicious hacker could potentially exploit.

Malicious hackers are known as black-hat hackers and are considered cybercriminals. Black hats maliciously misuse data and exploit security flaws for personal or political gain, or simply to create chaos.

“Social engineering bypasses all technologies, including firewalls.”

Kevin Mitnick – an American computer security consultant, author, and convicted hacker

Ethical hackers use their skills, methods, and techniques to test and bypass organizations' IT security. They uncover the vulnerabilities that could be exploited by black-hat hackers.

Ethical hackers document those vulnerabilities and advise on how to remediate them so organizations can strengthen their overall security. There are some major reasons to hire an ethical hacker to hack your company:

Keep An Eye On Your Company's Security:


The primary advantage of having ethical hackers on a company's payroll is that they are allowed to test the company's security measures in a controlled, safe environment.

These hackers can help companies learn which of their computer security measures are effective and which need updating. The data from these tests allows management to decide where and how to improve their information security.

With ever-expanding digital technology, you can never be sure that your online business is completely protected from malicious hackers. You should be aware that your company can be hacked at any moment, as there is always the possibility of a vulnerability in your system that hackers can use to harm your company. However, there are ways to ensure that your business does not suffer, or suffers as little as possible, from a potential cyber attack.

Finding Vulnerabilities For Your Company:


This is a newer strategy for fighting cybercrime, and it is still considered one of the less conventional ways to protect your business.

Ethical hackers can discover your vulnerabilities before a malicious hacker can exploit them. Companies hire white-hat hackers precisely because they are obligated to find vulnerabilities and report them, so network security can be hardened by eliminating them.

Kinds of vulnerabilities a company can have:

  • Human error
  • Criminal Activity Inside Your Organization
  • Hackers and other cybercriminals
  • Unsecured Endpoints
  • Third-Party Apps
  • Cloud Storage Apps
  • Inadequate Data Backup
  • Unprotected Sensitive Data
  • Stolen or Lost Smartphones or Tablets

Companies these days also add phone trackers to employees' devices in case they are lost.

When white-hat hackers explore the company's systems, or hack the company, they find its vulnerable areas. These can be technical, such as weak or insufficient passwords, or rooted in human-based processes. Exposing these vulnerabilities allows management to install more secure procedures and prevent attackers from exploiting them.

How ethical hackers identify security vulnerabilities:

  • Check whether all operating systems and software are up to date.
  • Evaluate the physical security of your network.
  • Perform a full vulnerability assessment.

In April 2017, the US Air Force announced its first-ever bug bounty challenge to see how hackers could break into the warfare service branch. This let them see their flaws through a different set of eyes, remedy those flaws, and boost their network security further.

The Traditional Way Is Outdated:



As technology expands, it poses a threat to your security as well, because it renders your existing security precautions outdated.

Most enterprises have a set of traditional technologies such as data loss prevention, web proxies, endpoint protection, vulnerability scanners, antivirus, firewalls and more. Each one is good at what it was built to do and serves an important purpose, yet we continue to see breaches due to missed threat alerts.

But simply purchasing every new tool or security product is not the answer. From the individual user to the small business to the large enterprise, it is important to make cybersecurity investment decisions within a risk-management framework that tries to secure the biggest bang for the buck.

Purchasing antivirus and other security tools and installing them on your computers is only a temporary way to keep malware away from your system: it postpones an attack for a certain amount of time, until hackers find other ways to bypass your current security measures.

Business owners have the right to view cyberattacks as a drastic problem, and thus the right to believe that a drastic problem calls for a drastic solution. Self-hacking is that solution.

Modern Trends Of Cyber Attacks:



Cybercriminals (black-hat hackers) use modern, advanced tools for breaching user security and data. As a result, more than 4.5 billion records were reportedly compromised in security breaches in 2018. You should get to know these rising trends of 2019:

Advanced Phishing:

Phishing is one of the most successful cyber attacks because of its speed: phishing sites often stay online for only two or three hours, so users can hardly report them in time. This is one reason only about 65% of URLs on the internet are considered trustworthy.

Advanced phishing attack kits are available on the dark web. These kits let attackers with only basic technical knowledge run their own phishing campaigns, which makes phishing an even more dangerous attack method.

Remote Access Attacks:

Remote access attacks are among the most common attacks; they are growing rapidly and becoming more advanced. Hackers target computers, smartphones, internet protocol (IP) cameras, and network-attached storage (NAS) devices, since these devices usually need ports opened and forwarded to external networks or the internet.

Smartphone Attacks:

The fact that users typically hold all their information on their phone, and that smartphones are now used for two-factor authentication – one of the most widely used cybersecurity tools – increases the security risk if the device is lost or stolen.

Smartphone attacks are also tied to unsafe browsing: people use their phones to manage financial operations or handle sensitive data outside the security of their home network, which makes this a prominent threat.

Demonstrate The Methods Used By Hackers:

“The guardian of the company's cybersecurity should be encouraged to network within the industry to swap information on the latest hacker tricks and the most effective defenses.”

Nina Easton


White-hat hackers can also demonstrate the methods and techniques used by unethical invaders. These demonstrations show the company how thieves, terrorists, and vandals could hack its systems and destroy its business.

If companies know how black-hat hackers are breaching their security, they can prevent those invaders from using the same techniques to penetrate their vulnerable systems.

There are dozens of methods for hacking a system, and breaking those methods down can spare a company future security issues. Common techniques include:

  • Social engineering
  • Phishing
  • Bypassing passwords
  • Open Wi-Fi or fake WAP
  • Denial of service
  • Viruses, trojans, etc.
  • Cookie theft
  • Clickjacking and bait & switch

Preparing For Future Cybercrime strategies:



Most companies are entirely unprepared for cyberattacks, even though an attack can destroy a business, especially a small one. Hacking your own system helps you understand how threats operate and how black-hat hackers use new information and techniques to attack systems.

Hiring an ethical hacker helps your security professionals prepare for future attacks, because they can react better to the constantly changing nature of online threats and hacking. There are some key strategies you can apply to your future cybersecurity:

Raising Awareness:

A cybersecurity education campaign is essential for raising public awareness of the risk and impact of cyber activity and of the need to apply basic protective measures on desktops, laptops, tablets, phones and other mobile devices.

Cybersecurity education should cover the basics:

  • Use strong passwords.
  • Apply system updates in a timely and efficient manner.
  • Secure devices by enabling a firewall and deploying solutions that address viruses, malware and spyware.
  • Learn not to click on email links or attachments, unless the sender is known and trusted. Even then, phishing emails sometimes spoof the sender’s identity to trick the user into clicking a link or attachment.

Secure Software Development:

Most data breaches happen because of weaknesses in software code. Software development is the first line of defense, and if it is weak, hackers will find the holes. Hiring good software developers helps you build software on strong code.

Risk Mitigation:

Risk mitigation means identifying and tracking risks, working out potential future risks, and planning ahead to avoid them. Developing a risk mitigation plan involves setting out how a business or project will react in the face of risk, and what actions need to be taken to reduce the threat of those risks.


There are also some benefits that are crucial for company growth:

Development and quality assurance:


Security testing requires strong focus, yet it is often ignored, which leaves your software vulnerable to threats.

A well-trained ethical hacker can add significant strength to a team by helping it conduct security testing efficiently and successfully. This makes the software more reliable than relying on in-house practice alone, which requires more time and energy.

The concept of hacking has also led to the development of tools that remove prominent and common vulnerabilities. This makes it easier for developers to learn which coding errors to avoid.

Professional Development:


There is a major gap between the demand for workers with cybersecurity skills and the available supply. Approximately 350,000 cybersecurity jobs are vacant in the United States, a number expected to increase tenfold by 2021.

For companies that want to keep hackers and cybersecurity talent in-house, this serves as a promising opportunity for potential hackers and for people interested in this specific field.

Reduce Losses Of Company:



Working with an ethical hacker can help reduce the company’s losses in the event of a breach in two ways:

  • If you are breached or hacked, an ethical hacker may be able to locate the vulnerability much faster, limiting an ongoing attack.
  • When you hire an ethical hacker, you can request that they provide an employee loyalty/honesty bond or other insurance coverage that will reimburse your company for losses experienced as a result of their activities.


The biggest online companies, such as Facebook, Twitter, and even Microsoft, pay hackers a decent price to break into their servers, because this lets them discover their vulnerabilities before a malicious hacker can exploit them.

Apart from this, there is the Google Chrome Vulnerability Reward Program, which rewards hackers who invest their time finding vulnerable bugs in Chrome (rewards for qualifying security bugs typically range from $500 to $150,000); Google then patches those vulnerabilities to make Chrome more secure.

Categories
Business Development Express Resume Startup

How To Validate Your Startup Idea



Everyone is excited about starting their own business. Countless people try their luck every year, but most businesses fail due to lack of proper planning.
Approximately 45% of startups fail because they do not reach their target market. To run a startup successfully, you need to validate your idea systematically to make your product successful in the market.

There are some essential steps to follow for validating any startup idea –

Market Validation Of The Product


Market validation comprises surveys of and feedback from your target market. It is a process for learning whether your target market is interested and whether your product fits the market or not.

Share Idea:


Many people make the mistake of keeping their startup idea to themselves. Sharing your idea with the right people and getting their views on it is the first step in validating it.

Tell anyone and everyone your idea without fear they are going to steal it

Aaron Patzer, founder of Mint.

You can get quality advice from people who are already in that business by asking them specific questions directly, and by following successful entrepreneurs on LinkedIn who have accurate information about a specific industry. You can collect even more quality information and advice if you get the chance to meet any of them.

Analyze the competition:


The first thing you should always keep in mind is analyzing your competition. Study their business and find the areas where they are lacking, so you can focus on those areas. For example, if you are planning to launch an operating system, there is surely a huge market for it, but Windows already dominates that market, and Mac and Linux are also doing very well. It would be hard to find space for your product, and even if you made it into the market, you could not compete with the level of resources and depth they have.
It is important to know your competitors very well. By analyzing them you will learn about their mistakes, how they overcame them, and the strategies they followed for success, and you can apply those lessons to your startup.

Evaluate Whether Your Idea Is Profitable:


You should research whether your product is profitable, because you cannot invest your own money forever: your business has to generate revenue to sustain itself. It does not make sense to start a business without knowing whether it has the potential for profitability.
If you are going to launch your startup, you need proper planning that will make your business profitable.

Prepare Your Product Concept:


The purpose of creating a product concept is to find the key questions to test in the market. These questions should relate to the problems your target market might face.

  • Who is your customer: Every business has a specific market, so you have to be sure which one you are targeting and how big it is. If your product is for a particular market, who exactly is the buyer you have in mind?
  • Why are you in the market: You should be aware of your target customers' problems. Once you know their problems, you will be able to validate and solve them.
  • How your product will help: Explain to customers how it will help them resolve their problems. You can show a prototype to your customers and use their feedback to improve the product further.
  • Key features: Your product should provide plenty of benefits to your customers and make their lives easier and better, such as value for money and time savings.

Interview With People:


You can hardly get people to take free surveys, because people do not want to waste their time on them. But this is a crucial part of validating your startup idea: it helps you understand customers' needs and problems and gives you valuable insight.
Start with a list of questions to learn more, and make sure you can visit people in person or secure ten minutes of their precious time over a video call. A natural face-to-face interview is essential for seeing their reactions. Some points to remember while interviewing any customer:

  • Thank them for their valuable time and tell them how they will help you make the best product for customers.
  • Explain how your product is different from others already in the market and how you are going to tackle those issues.
  • Explain your product and the nature of your business.
  • Walk them through all versions of your product and ask for their thoughts. Observe their body language, reactions, feelings, and thinking about your startup idea.
  • There is a difference between liking and buying a product: people only buy a product when they need it. So if someone likes your product, ask them whether they would actually buy that kind of product.

Review And Decide:


Eventually, you have to review all the feedback and decide what would work best for your potential clients and how you can change or modify your product for your target market.
Lean market validation helps you gather enough information and data to make a decision. This approach helps a startup validate its idea and succeed.

Validate With MVP (Minimum Viable Product)


The MVP (Minimum Viable Product) is a technique for creating a product with just enough features for early customers. The final product (having all the features) is designed only after considering the feedback of those early customers, the users of the initial product. It is a prototype version of your product.

  • It has enough value for people to buy and use it initially.
  • It shows enough future benefit to retain early customers.
  • It provides a guide for the product's future development.

Idea Validation Landing Page


Building a prototype takes less time and effort than creating a complete product and it is an easy way to validate your startup idea. You can create multiple prototypes for better future development using customer reviews and basic market research.

Some important features you should keep in mind while creating a prototype –

Simple and Consistent Design:


The prototype should look professional: it has to look as though you already have an established business. Pay attention to design and image quality, since you are going to sell your MVP. A prototype with user-friendly functionality and a simple yet clean design attracts customers more easily than one with complex design and functionality. The design and functionality of a product are important for brand building, improving conversions, and adding legitimacy.

Product Description:


When you write the copy that describes the product, you need to address users' problems, the solution you are providing, and the benefits of your product or services.

A/B testing:



A/B testing is a very important step during validation. It lets users compare and vote between two or more variants, and statistical analysis determines which one best meets the conversion goal. The results help you build a better MVP and a better market strategy.

Categories
Express GraphQL Javascript Node

NestJS TypeORM GraphQL Dataloader tutorial with TypeScript


# All source is available here, you can either download 
# or follow the tutorial below to understand 
# each and every component individually  


https://github.com/codersera-repo/typeorm-graphql-nestjs-dataloader-starter-kit 

Before we dive deep into integrating all three into a single project, and into taking advantage of the GraphQL query language and TypeORM with a relational database such as PostgreSQL (PSQL), MySQL, or any other DB, let's understand the individual pieces separately.

I assume that you have basic knowledge of ORMs, Express, Node, GraphQL, and NestJS. In case you are missing any of it, here is a brief introduction.

GraphQL

GraphQL is a query language for APIs. When a request (a query, in GraphQL terms) is triggered, the query itself decides what data flows over the network.

GraphQL serves requests through a single smart endpoint, unlike a traditional REST API, where a separate endpoint is triggered depending on the data and resource.
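To illustrate the single-endpoint idea (the endpoint, query shape, and field names are only examples, not this tutorial's schema, and a runtime with a global fetch such as a browser or Node 18+ is assumed), a client posts one query describing exactly the data it wants:

// One POST to a single /graphql endpoint; the query decides what comes back.
const query = `
  query {
    books {
      title
      author { name }
    }
  }
`;

async function fetchBooks() {
  const res = await fetch('http://localhost:3000/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  const { data } = await res.json();
  return data.books; // only the requested fields are returned
}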

NestJS

NestJS is a framework that serves our server-side needs. It uses Express (or Fastify) under the hood and has robust support for TypeScript. It is designed to give the backend a structure of easy-to-maintain modules.

TypeORM

TypeORM is an Object Relational Mapping tool that can be used with databases like Postgres, MySQL, SQLite, and MongoDB. It supports multiple databases in one application and lets us write modern JavaScript for our database needs.

Let us start building a basic author-books-genres program using TypeORM, NestJS, GraphQL, a REST API, and Dataloader.

1. Installing NestJS and creating a scaffold project.

Either scaffold the project with the Nest CLI or clone a starter project (both will produce the same outcome).


npm i -g @nestjs/cli

nest new user-typeorm-graphql-dataloader

You can choose either yarn or npm; we are going with the yarn route.

Install the dependencies for GraphQL, dataloader, typeorm, SQLite.

Once you have created the new project, change your directory to the project we created and install the following dependencies:


yarn add dataloader graphql graphql-tools type-graphql typeorm apollo-server-express voyager @types/graphql @nestjs/graphql sqlite3 @nestjs/typeorm

Create a .env file, where we will put our environment constants. For now, we will use this file to hold the TypeORM configuration and the port our server runs on. We are using SQLite for the purposes of this tutorial, but you can use any SQL database; TypeORM supports multiple drivers.

# .env 

TYPEORM_CONNECTION = sqlite
TYPEORM_DATABASE = data/dev.db
TYPEORM_LOGGING = true
TYPEORM_ENTITIES = src/db/models/*.entity.ts
TYPEORM_MIGRATIONS = src/db/migrations/*.ts
TYPEORM_MIGRATIONS_RUN = src/db/migrations
TYPEORM_ENTITIES_DIR = src/db/models
TYPEORM_MIGRATIONS_DIR = src/db/migrations
EXPRESS_PORT = 3000

Next, we need to create some directories where our TypeORM entities, TypeORM migrations, and SQLite database are going to reside.


mkdir -p src/db/models  # our entities live here
mkdir -p src/db/migrations # our migrations live here
mkdir -p data # our sqlite db lives here

Creating our Migrations

We are going to use the TypeORM CLI to generate our migrations; luckily, the migrations CLI ships with the typeorm package.

  1. Author: we will create the Author table here
  2. Book: we will create the Book table here
  3. Genre: we will create the Genre table here
  4. BookGenre: we will create the many-to-many mapping between books and genres

That's all the migrations we need. Below are the commands to create the TypeORM migrations; the .env file configured above already tells the CLI where to create them.


 ts-node ./node_modules/typeorm/cli.js migration:create -n CreateAuthor    
 ts-node ./node_modules/typeorm/cli.js migration:create -n CreateBook 
 ts-node ./node_modules/typeorm/cli.js migration:create -n CreateGenre   
 ts-node ./node_modules/typeorm/cli.js migration:create -n CreateBookGenre 

The above commands should create 4 files in your src/db/migrations directory; here is what a single migration looks like.

# src/db/migrations/1563360242539-CreateAuthor.ts 

import {MigrationInterface, QueryRunner} from "typeorm";

export class CreateAuthor1563360242539 implements MigrationInterface {

    public async up(queryRunner: QueryRunner): Promise<any> {
    }

    public async down(queryRunner: QueryRunner): Promise<any> {
    }

}

TypeORM uses epoch time as the prefix for migrations so that it can run them in order. You can now populate your migrations with the table creation; here is what your migrations should look like.


import { MigrationInterface, QueryRunner, Table } from 'typeorm';

export class CreateAuthor1563360242539 implements MigrationInterface {

    private authorTable = new Table({
        name: 'authors',
        columns: [
            {
                name: 'id',
                type: 'INTEGER',
                isPrimary: true,
                isGenerated: true,
                generationStrategy: 'increment',
            },
            {
                name: 'name',
                type: 'varchar',
                length: '255',
                isNullable: false,
            },
            {
                name: 'created_at',
                type: 'timestamptz',
                isNullable: false,
                default: 'now()',
            },
            {
                name: 'updated_at',
                type: 'timestamptz',
                isNullable: false,
                default: 'now()',
            }],
    });

    public async up(queryRunner: QueryRunner): Promise<any> {
        await queryRunner.createTable(this.authorTable);
    }

    public async down(queryRunner: QueryRunner): Promise<any> {
        await queryRunner.dropTable(this.authorTable);
    }

}


import { MigrationInterface, QueryRunner, Table, TableForeignKey } from 'typeorm';

export class CreateBook1563360267250 implements MigrationInterface {

    private bookTable = new Table({
        name: 'books',
        columns: [
            {
                name: 'id',
                type: 'INTEGER',
                isPrimary: true,
                isUnique: true,
                isGenerated: true,
                generationStrategy: 'increment',
            },
            {
                name: 'title',
                type: 'varchar',
                length: '255',
                isNullable: false,
            },
            {
                name: 'author_id',
                type: 'INTEGER',
                isNullable: false,
            },
            {
                name: 'created_at',
                type: 'timestamptz',
                isPrimary: false,
                isNullable: false,
                default: 'now()',
            },
            {
                name: 'updated_at',
                type: 'timestamptz',
                isPrimary: false,
                isNullable: false,
                default: 'now()',
            }],
    });

    private foreignKey = new TableForeignKey({
        columnNames: ['author_id'],
        referencedColumnNames: ['id'],
        onDelete: 'CASCADE',
        referencedTableName: 'authors',
    });

    public async up(queryRunner: QueryRunner): Promise<any> {
      await queryRunner.createTable(this.bookTable);
      await queryRunner.createForeignKey('books', this.foreignKey);
    }

    public async down(queryRunner: QueryRunner): Promise<any> {
      await queryRunner.dropTable(this.bookTable);
    }

}


import { MigrationInterface, QueryRunner, Table } from 'typeorm';

export class CreateGenre1563360272082 implements MigrationInterface {

    private genreTable = new Table({
        name: 'genres',
        columns: [
            {
                name: 'id',
                type: 'INTEGER',
                isPrimary: true,
                isUnique: true,
                isGenerated: true,
                generationStrategy: 'increment',
            },
            {
                name: 'genre_name',
                type: 'varchar',
                length: '255',
                isNullable: false,
            },
            {
                name: 'created_at',
                type: 'timestamptz',
                isPrimary: false,
                isNullable: false,
                default: 'now()',
            },
            {
                name: 'updated_at',
                type: 'timestamptz',
                isPrimary: false,
                isNullable: false,
                default: 'now()',
            }],
    });


    public async up(queryRunner: QueryRunner): Promise<any> {
        await queryRunner.createTable(this.genreTable);
    }

    public async down(queryRunner: QueryRunner): Promise<any> {
        await queryRunner.dropTable(this.genreTable);
    }

}


import { MigrationInterface, QueryRunner, Table } from 'typeorm';

export class CreateBookGenre1563360276498 implements MigrationInterface {

    private genreBookTable = new Table({
        name: 'books_genres',
        columns: [
            {
                name: 'id',
                type: 'INTEGER',
                isPrimary: true,
                isUnique: true,
                isGenerated: true,
                generationStrategy: 'increment',
            },
            {
                name: 'book_id',
                type: 'INTEGER',
                isNullable: true,
            },
            {
                name: 'genre_id',
                type: 'INTEGER',
                isNullable: true,
            },
            {
                name: 'created_at',
                type: 'timestamptz',
                isPrimary: false,
                isNullable: false,
                default: 'now()',
            },
            {
                name: 'updated_at',
                type: 'timestamptz',
                isPrimary: false,
                isNullable: false,
                default: 'now()',
            }],
    });

    public async up(queryRunner: QueryRunner): Promise<any> {
        await queryRunner.createTable(this.genreBookTable);
    }

    public async down(queryRunner: QueryRunner): Promise<any> {
        await queryRunner.dropTable(this.genreBookTable);
    }

}

Once the migrations are set up, you need to run them. TypeORM provides a very easy-to-use CLI to run all the migrations; it will also create a migrations table in the database, where it keeps a record of every migration it has applied.


ts-node ./node_modules/typeorm/cli.js migration:run

Once you execute this command, a dev.db file is created in the data folder. Use a SQLite browser to view the database; it should contain all the tables you created plus the migrations table.

Creating our Model entities to map our database tables.

Let's create the entities now. We are going to use the TypeORM data mapper approach for our models. However, if you want to use TypeORM active records instead, you just need to extend all your models from the BaseEntity class, which can be imported from 'typeorm' like this:


import { BaseEntity } from 'typeorm';
class Author extends BaseEntity
......
# src/db/models/author.entity.ts

import {
  Column,
  CreateDateColumn,
  Entity,
  OneToMany,
  PrimaryGeneratedColumn,
  UpdateDateColumn,
} from 'typeorm';
import Book from './book.entity';

@Entity()
export default class Author {

  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  name: string;

  @CreateDateColumn({name: 'created_at'})
  createdAt: Date;

  @UpdateDateColumn({name: 'updated_at'})
  updatedAt: Date;

  // Associations
  @OneToMany(() => Book, book => book.authorConnection)
  bookConnection: Promise<Book[]>;

}
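
# src/db/models/book.entity.ts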


import {
  Entity,
  PrimaryGeneratedColumn,
  CreateDateColumn,
  UpdateDateColumn,
  Column, OneToMany,
  JoinColumn,
  ManyToOne,
} from 'typeorm';
import BookGenre from './book-genre.entity';
import Author from './author.entity';
import { Field, ObjectType } from 'type-graphql';

@ObjectType()
@Entity({name: 'books'})
export default class Book {

  @Field()
  @PrimaryGeneratedColumn()
  id: number;

  @Field()
  @Column()
  title: string;

  @Field()
  @Column({name: 'author_id'})
  authorId: number;

  @Field()
  @CreateDateColumn({name: 'created_at'})
  createdAt: Date;

  @Field()
  @UpdateDateColumn({name: 'updated_at'})
  updatedAt: Date;

  @Field(() => Author)
  author: Author;

  // Associations

  @ManyToOne(() => Author, author => author.bookConnection, {primary:
      true})
  @JoinColumn({name: 'author_id'})
  authorConnection: Promise<Author>;

  @OneToMany(() => BookGenre, bookGenre => bookGenre.genre)
  genreConnection: Promise<BookGenre[]>;
}


# src/db/models/genre.entity.ts

import {
  Entity,
  Column,
  CreateDateColumn,
  UpdateDateColumn,
  OneToMany, PrimaryGeneratedColumn,
} from 'typeorm';
import BookGenre from './book-genre.entity';

@Entity()
export default class Genre {

  @PrimaryGeneratedColumn()
  id: number;

  @Column({name: 'genre_name'})
  name: string;

  @CreateDateColumn({name: 'created_at'})
  createdAt: Date;

  @UpdateDateColumn({name: 'updated_at'})
  updatedAt: Date;

  // Associations
  @OneToMany(() => BookGenre, bookGenre => bookGenre.book)
  bookConnection: Promise<BookGenre[]>;
}
# src/db/models/book-genre.entity.ts


import {
  Entity,
  PrimaryColumn,
  Column,
  CreateDateColumn,
  UpdateDateColumn,
  ManyToOne, JoinColumn, PrimaryGeneratedColumn,
} from 'typeorm';
import Genre from './genre.entity';
import Book from './book.entity';

@Entity()
export default class BookGenre {

  @PrimaryGeneratedColumn()
  id: number;

  @PrimaryColumn({name: 'book_id'})
  bookId: number;

  @PrimaryColumn({name: 'genre_id'})
  genreId: number;

  @CreateDateColumn({name: 'created_at'})
  createdAt: Date;

  @UpdateDateColumn({name: 'updated_at'})
  updatedAt: Date;

  // Associations
  @ManyToOne(() => Book, book => book.genreConnection, {primary:
      true})
  @JoinColumn({name: 'book_id'})
  book: Book[];

  @ManyToOne(() => Genre,  genre => genre.bookConnection, {primary:
      true})
  @JoinColumn({name: 'genre_id'})
  genre: Genre[];
}

That's all. Once you have all the entities set up, you are ready to CRUD records in the database using repositories mapped to the entity models above.

TypeORM Repositories as a global module to query the database

Since we took the data mapper route, we need to define repositories for each of our entities, so we are going to create a global service repo.service.ts which can be accessed across the nest application to query any table or entity. Before that, we have to configure TypeORM in the main module, which in our case is app.module.ts.

src/app.module.ts

import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import RepoModule from './repo.module';

@Module({
  imports: [
    TypeOrmModule.forRoot(), // to use typeorm
    RepoModule,              // Don't worry, we will create this next
  ],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}

Next create two files, repo.service.ts and repo.module.ts

# src/repo.service.ts

import { Injectable } from '@nestjs/common';
import { Repository } from 'typeorm';
import { InjectRepository } from '@nestjs/typeorm';
import Author from './db/models/author.entity';
import Book from './db/models/book.entity';
import Genre from './db/models/genre.entity';
import BookGenre from './db/models/book-genre.entity';

@Injectable()
class RepoService {
  public constructor(
    @InjectRepository(Author) public readonly authorRepo: Repository<Author>,
    @InjectRepository(Book) public readonly bookRepo: Repository<Book>,
    @InjectRepository(Genre) public readonly genreRepo: Repository<Genre>,
    @InjectRepository(BookGenre) public readonly bookGenreRepo: Repository<BookGenre>,
  ) {}
}

export default RepoService;
# src/repo.module.ts

import { Global, Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import RepoService from './repo.service';
import Author from './db/models/author.entity';
import Book from './db/models/book.entity';
import Genre from './db/models/genre.entity';
import BookGenre from './db/models/book-genre.entity';

@Global()
@Module({
  imports: [
    TypeOrmModule.forFeature([
      Author,
      Book,
      Genre,
      BookGenre,
    ]),
  ],
  providers: [RepoService],
  exports: [RepoService],
})
class RepoModule {

}
export default RepoModule;

Wow! We have just completed our TypeORM setup using the data mapper pattern, and we have created migrations and models using TypeORM one-to-many and many-to-many relationships.

Testing our first controller to see if we are able to query the database correctly

Open the app.service.ts file; we are going to use the existing function generated by the NestJS CLI scaffolding to test our structure.

# src/app.service.ts

import { Injectable } from '@nestjs/common';
import RepoService from './repo.service';

@Injectable()
export class AppService {

  constructor(private readonly repoService: RepoService) {

  }

  async getHello(): Promise<string> { // querying database
    return `Total books are ${await this.repoService.bookRepo.count()}`;
  }
}
# src/app.controller.ts
......
  @Get()
  async getHello(): Promise<string> {
    return this.appService.getHello();
  }
.......

Once you have made changes to both the app.service and app.controller files, you can run your project with ts-node src/main.ts. Once it runs, open the browser (port 3000 by default) and you should see the following line:

Total books are 0

Hurray! Your project is able to query the database. But that’s not all: next we are going to integrate GraphQL into this project.

Integrating GraphQL with Nestjs and TypeORM

We already installed the required packages above; in case you missed it, install the following packages for GraphQL with NestJS and TypeORM:


yarn add type-graphql graphql dataloader @nestjs/graphql apollo-server-express
yarn add --dev @types/graphql 

Once the above packages are installed, we will decorate our entities with type-graphql decorators so that GraphQL types corresponding to the entities are created inside our GraphQL schema.

To create a type corresponding to an entity we will use the @ObjectType() decorator from the type-graphql package. So your entities would look something like this:

# src/db/models/author.entity.ts

......
@ObjectType()
@Entity({name: 'authors'})
export default class Author {

  @Field()
  @PrimaryGeneratedColumn()
  id: number;
.......

Once the type is exposed to the schema, we will start exposing our fields for the type using the @Field() decorator again from the type-graphql package.

Convert all your typeorm entities into GraphQL schemas.

Remember: don’t annotate your associations with @Field(); we are going to deal with them separately.
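As an illustration, here is a hedged sketch of how book.entity.ts might look after adding the decorators. The column names (title, author_id, timestamps) are assumptions based on what this tutorial uses elsewhere, so adapt them to your actual entity; note that the associations are intentionally left without @Field(), as they will be resolved separately.

# src/db/models/book.entity.ts (sketch)

import { Field, ObjectType } from 'type-graphql';
import { Column, CreateDateColumn, Entity, PrimaryGeneratedColumn, UpdateDateColumn } from 'typeorm';

@ObjectType()
@Entity({name: 'books'})
export default class Book {

  @Field()
  @PrimaryGeneratedColumn()
  id: number;

  @Field()
  @Column()
  title: string;

  @Field()
  @Column({name: 'author_id'})   // assumed column name
  authorId: number;

  @Field()
  @CreateDateColumn({name: 'created_at'})
  createdAt: Date;

  @Field()
  @UpdateDateColumn({name: 'updated_at'})
  updatedAt: Date;

  // associations (author, genreConnection) stay un-annotated here;
  // they are exposed through @ResolveProperty() in the resolvers
}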

Next, we have to import the GraphQL module in our app.module.ts so that NestJS will know about it. Change your app.module.ts file

......
import { GraphQLModule } from '@nestjs/graphql';

@Module({

  imports: [TypeOrmModule.forRoot(),
     RepoModule,
    GraphQLModule.forRoot({
      autoSchemaFile: 'schema.gql',
      playground: true,
    }),
  ],
........

GraphQL will generate the schema in the schema.gql file in your root directory. You don’t have to worry about this file, as it is maintained and updated by NestJS GraphQL according to your types, resolvers, and mutations.

Adding GraphQL TypeORM Resolvers which includes our mutations and queries.

Create a new directory inside src that will hold the resolvers for all the TypeORM entities:


mkdir -p src/resolvers

In our resolvers directory, we will create our first resolver, author.resolver.ts, for the author entity.

Inside the same directory, we will also create a subdirectory to hold our GraphQL input types:


mkdir -p src/resolvers/input


Since we are working on the author resolver first, we will begin by creating author.input.ts, which will hold the input types for the author entity.

The first input type will be the one required for our create author mutation responsible for creating a new author record in our database.


#src/resolvers/input/author.input.ts

import { Field, InputType } from 'type-graphql';

@InputType()
class AuthorInput {
  @Field()
  readonly name: string;
}

export default AuthorInput;

Now that we have set up the input for our resolver, we can start on the resolver itself. Each resolver class is annotated with the @Resolver() decorator, which is imported from the @nestjs/graphql package.


#src/resolvers/author.resolver.ts

import { Resolver } from '@nestjs/graphql';

@Resolver()
class AuthorResolver {
}

After we have created the resolver class, we will add the repo service as a dependency in the constructor. Because our RepoService is global, we don’t need to provide it at the module level.


#src/resolvers/author.resolver.ts

import { Resolver } from '@nestjs/graphql';
import RepoService from '../repo.service';

@Resolver()
class AuthorResolver {
  constructor(private readonly repoService: RepoService) {}
}

To create a query or mutation, the class method should be annotated with the @Query() or @Mutation() decorator respectively, again imported from the @nestjs/graphql package.


#src/resolvers/author.resolver.ts

import { Args, Mutation, Query, Resolver } from '@nestjs/graphql';
import RepoService from '../repo.service';
import Author from '../db/models/author.entity';
import AuthorInput from './input/author.input';

@Resolver()
class AuthorResolver {
  constructor(private readonly repoService: RepoService) {}

  @Query(() => [Author])
  public async authors(): Promise<Author[]> {
    return this.repoService.authorRepo.find();
  }

  @Query(() => Author, {nullable: true})
  public async author(@Args('id') id: number): Promise<Author> {
    return this.repoService.authorRepo.findOne(id);
  }

  @Mutation(() => Author)
  public async createAuthor(@Args('data') input: AuthorInput): Promise<Author> {
    const author = this.repoService.authorRepo.create({name: input.name});
    return this.repoService.authorRepo.save(author);
  }
}

export default AuthorResolver;

To create the above mutations and queries in our GraphQL schema, we have to register all the resolvers as providers in our main module, which in this case is app.module.ts. To keep things simple we will create a separate array of resolvers and use the ES6 spread operator to include it.

#src/app.module.ts

........
const graphQLImports = [
  AuthorResolver,
];

@Module({
  imports: [TypeOrmModule.forRoot(),
    RepoModule,
    GraphQLModule.forRoot({
      autoSchemaFile: 'schema.gql',
      playground: true,
    }),
  ],
  controllers: [AppController],
  providers: [AppService, ...graphQLImports],  // resolvers are registered as providers
})
export class AppModule {}

The above resolver was the easy part, as it did not involve any associations. Although the associations are handled by TypeORM at the database level, each associative property still needs to be resolved for GraphQL.

Now we will begin writing our resolver for the Book entity. The input type for the Book gives two options for the author:

  1. Connect to an existing author in the database by their ID.
  2. Create a new author along with the book.

The input type for the book is shown below with the above functionality implemented.


#src/resolvers/input/book.input.ts

import { Field, InputType } from 'type-graphql';
import AuthorInput from './author.input';

@InputType()
class BookAuthorConnectInput {
  @Field()
  readonly id: number;
}

@InputType()
class BookAuthorInput {
  @Field({nullable: true})
  readonly connect: BookAuthorConnectInput;

  @Field({nullable: true})
  readonly create: AuthorInput;
}

@InputType()
class BookInput {
  @Field()
  readonly title: string;

  @Field()
  readonly author: BookAuthorInput;
}

export default BookInput;

According to the associations defined in our TypeORM migrations and entities, each book must have one author. From the GraphQL playground, while fetching book records, the author record of each book can also be fetched. To fetch the author record we need to resolve that property, and to do so we will use the @ResolveProperty() decorator from the @nestjs/graphql package.

To use @ResolveProperty() you must pass the entity to the @Resolver() decorator, e.g. @Resolver(Book).

@ResolveProperty() expects the name of the property to resolve as a parameter; otherwise, the method name must match the name of the property. The GraphQL parent object is injected into the method with the @Parent() decorator.


#src/resolvers/book.resolver.ts
......
  @ResolveProperty()
  public async author(@Parent() parent): Promise<Author> {
    return this.repoService.authorRepo.findOne(parent.authorId);
  }
......
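The rest of book.resolver.ts is not shown above, so here is a hedged sketch of what it could look like, combining a books query, a createBook mutation that honours the connect/create options of BookInput, and the author @ResolveProperty() from the snippet. It assumes the Book entity exposes title and authorId columns (authorId is what the resolve property above already relies on); adjust the field names to your own entity.

# src/resolvers/book.resolver.ts (sketch)

import { Args, Mutation, Parent, Query, ResolveProperty, Resolver } from '@nestjs/graphql';
import RepoService from '../repo.service';
import Author from '../db/models/author.entity';
import Book from '../db/models/book.entity';
import BookInput from './input/book.input';

@Resolver(Book)   // passing the entity enables @ResolveProperty() for Book fields
class BookResolver {

  constructor(private readonly repoService: RepoService) {}

  @Query(() => [Book])
  public async books(): Promise<Book[]> {
    return this.repoService.bookRepo.find();
  }

  @Mutation(() => Book)
  public async createBook(@Args('data') input: BookInput): Promise<Book> {
    // either connect to an existing author by id, or create a new one
    let authorId: number;
    if (input.author.connect) {
      authorId = input.author.connect.id;
    } else {
      const author = await this.repoService.authorRepo.save(
        this.repoService.authorRepo.create({name: input.author.create.name}),
      );
      authorId = author.id;
    }
    const book = this.repoService.bookRepo.create({title: input.title, authorId});
    return this.repoService.bookRepo.save(book);
  }

  @ResolveProperty()
  public async author(@Parent() parent): Promise<Author> {
    return this.repoService.authorRepo.findOne(parent.authorId);
  }
}

export default BookResolver;

If you create this resolver, remember to add BookResolver to the graphQLImports array so it is registered as a provider along with the others.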

Now we will begin with the final block of our application, the Genre resolver. Below are the input types for Genre and BookGenre.

#src/resolvers/input/genre.input.ts

import { Field, InputType } from 'type-graphql';

@InputType()
class GenreInput {
  @Field()
  readonly name: string;
}
export default GenreInput;


#src/resolvers/input/book-genre.input.ts
import { Field, InputType } from 'type-graphql';

@InputType()
class BookGenreInput {
  @Field()
  readonly genreId: number;

  @Field()
  readonly bookId: number;
}

export default BookGenreInput;

Now that we have finished the input types for Genre and BookGenre, we will create the resolvers for them.

#src/resolvers/genre.resolver.ts

import { Args, Mutation, Parent, Query, ResolveProperty, Resolver } from '@nestjs/graphql';
import RepoService from '../repo.service';
import Book from '../db/models/book.entity';
import Genre from '../db/models/genre.entity';
import GenreInput from './input/genre.input';

@Resolver(Genre)
class GenreResolver {

  constructor(private readonly repoService: RepoService) {}
  @Query(() => [Genre])
  public async genres(): Promise<Genre[]> {
    return this.repoService.genreRepo.find();
  }
  @Query(() => Genre, {nullable: true})
  public async genre(@Args('id') id: number): Promise<Genre> {
    return this.repoService.genreRepo.findOne(id);
  }

  @Mutation(() => Genre)
  public async createGenre(@Args('data') input: GenreInput): Promise<Genre> {
    const genre = new Genre();
    genre.name = input.name;
    return this.repoService.genreRepo.save(genre);
  }

  @ResolveProperty()
  public async book(@Parent() parent): Promise<Book[]> {
    const bookGenres = await this.repoService.bookGenreRepo.find({
      where: {genreId: parent.id},
      relations: ['book'],
    });
    // wait for every lazy `book` relation to resolve before returning
    return Promise.all(bookGenres.map(bookGenre => bookGenre.book));
  }
}

export default GenreResolver;
#src/resolvers/book-genre.resolver.ts

import { Args, Mutation, Query, Resolver } from '@nestjs/graphql';
import RepoService from '../repo.service';
import BookGenre from '../db/models/book-genre.entity';
import BookGenreInput from './input/book-genre.input';

@Resolver()
class BookGenreResolver {

  constructor(private readonly repoService: RepoService) {}
  @Mutation(() => BookGenre)
  public async createBookGenre(@Args('data') input: BookGenreInput): Promise<BookGenre> {
    const bookGenre = new BookGenre();
    const {bookId, genreId} = input;
    bookGenre.bookId = bookId;
    bookGenre.genreId = genreId;
    return this.repoService.bookGenreRepo.save(bookGenre);
  }

  @Query(() => [BookGenre])
  public async bookGenres(): Promise<BookGenre[]> {
    return this.repoService.bookGenreRepo.find();
  }

  @Query(() => BookGenre)
  public async bookGenre(@Args('id') id: number): Promise<BookGenre> {
    return this.repoService.bookGenreRepo.findOne(id);
  }
}

export default BookGenreResolver;

The queries and mutations created can be run in the GraphQL playground, which you can reach at the route “/graphql”.

The GraphQL playground.

The queries and mutations to run are written on the left side of the GraphQL playground, and the result is displayed on the right side after clicking the play button.

For your reference, below are some graphql queries and mutations with their results fetched from the playground.


// mutation to create an author
mutation {
  createAuthor(data: {
    name: "Sahil"
  }) {
    id
    name
    createdAt
    updatedAt
  }
}
Create Author GraphQL Mutation
Authors GraphQL Query.

The problem with the current approach

The problem is not an obvious one: our code is not going to blow up, and everything will keep working. It is an efficiency problem, and by using a dataloader we can make our application much more efficient. To illustrate it, we will fetch the book records through our genres query.

In the above query, whenever the books are fetched, the @ResolveProperty() method is triggered and the book records are queried, i.e. for n genres, n additional database queries will run, for a total of n+1 queries. Database round trips are among the most expensive operations we have, so to resolve this problem we will use dataloaders.

Introduction To Dataloader

Dataloader is a generic utility developed by Facebook to abstract request batching and caching.

A dataloader waits for a single event-loop tick before executing its batch function; by the time that tick completes, all the genre ids whose book records need to be fetched have been collected. Instead of running n queries for n genres, we run a single query, which is a huge improvement over the previous approach.

To use it we need the dataloader package which, in case you haven’t installed it already, can be installed with the command below.


yarn add dataloader
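To see the batching behaviour in isolation, here is a tiny, self-contained sketch (not part of the project). The keys and the fake batch function are made up purely for illustration; note how two load() calls issued in the same tick reach the batch function as a single array.

# dataloader-demo.ts (illustration only)

import DataLoader = require('dataloader');

// fake batch function: receives every key requested during one event-loop tick
const batchFn = async (ids: readonly number[]) => {
  console.log('batch called once with', ids);       // e.g. [1, 2]
  return ids.map(id => `record #${id}`);            // must return one value per key, in the same order
};

const loader = new DataLoader(batchFn);

// both loads are scheduled in the same tick, so batchFn runs only once for [1, 2]
Promise.all([loader.load(1), loader.load(2)])
  .then(results => console.log(results));           // ['record #1', 'record #2']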

Now that the package is installed, we will create the directory where our loaders will sit.


mkdir -p src/db/loaders

Now that the directory for our loaders is created, we will write the loader that fetches Books based on the Genre ids passed to it.

#src/db/loaders/books.loader.ts

import DataLoader = require('dataloader');
import Book from '../models/book.entity';
import { getRepository } from 'typeorm';
import BookGenre from '../models/book-genre.entity';

const batchBooks = async (genreIds: readonly number[]) => {
  const bookGenres = await getRepository(BookGenre)
    .createQueryBuilder('bookGenres')
    .leftJoinAndSelect('bookGenres.book', 'book')
    .where('bookGenres.genreId IN (:...ids)', {ids: genreIds})
    .getMany();
  const genreIdToBooks: {[key: string]: Book[]} = {};
  bookGenres.forEach(bookGenre => {
    if (!genreIdToBooks[bookGenre.genreId]) {
      genreIdToBooks[bookGenre.genreId] = [(bookGenre as any).__book__];
    } else {
      genreIdToBooks[bookGenre.genreId].push((bookGenre as any).__book__);
    }
  });
  // dataloader expects one entry per key, in the same order as genreIds
  return genreIds.map(genreId => genreIdToBooks[genreId] || []);
};
const genreBooksLoader = () => new DataLoader(batchBooks);

export {
  genreBooksLoader,
};

The loaders are passed to each of our queries and mutations as a part of the context. Therefore, we will now begin writing our GraphQL context type.


mkdir -p src/types/
#src/types/graphql.types.ts

import { genreBooksLoader } from '../db/loaders/books.loader';

export interface IGraphQLContext {
  genreBooksLoader: ReturnType<typeof genreBooksLoader>;
}

Once the type for the GraphQL context is created, we will create the context itself. The context is configured in our GraphQL module, in the main app.module.ts.

#src/app.module.ts

......
@Module({

  imports: [TypeOrmModule.forRoot(),
    RepoModule,
    GraphQLModule.forRoot({
      autoSchemaFile: 'schema.gql',
      playground: true,
      // build a fresh context (and therefore a fresh loader cache) for every request
      context: () => ({
        genreBooksLoader: genreBooksLoader(),
      }),
    }),
  ],
  controllers: [AppController],
  providers: [AppService, ...graphQLImports],  // resolvers registered as providers
})
......

Now that the loader is available to each of our queries and mutations via the context, we can use it and modify the book resolve property in our Genre resolver.

#src/resolvers/genre.resolver.ts

........
  @ResolveProperty()
  public async book(
    @Parent() parent,
    @Context() {genreBooksLoader}: IGraphQLContext,  // Context comes from @nestjs/graphql, IGraphQLContext from ../types/graphql.types
  ): Promise<Book[]> {
    return genreBooksLoader.load(parent.id);
  }

........

The book property is now resolved using the dataloader exposed in our GraphQL context, and we have solved the n+1 problem: instead of running n database queries, the records are now fetched with a single query. Thanks to Dataloader.

Also Learn TypeORM With NEST JS Basic Tutorial

Categories
Development Express Node Remote developer Resume Top Coder

Flutter Tutorial: How to Create Your First Flutter App

What is Flutter?

Flutter is the Google Mobile App Development SDK that enables your product to simultaneously target both Android and iOS platforms without the need for two distinct code bases. In addition, it is also possible to compile applications using Flutter to target the upcoming Fuchsia operating system from Google.

Recently, Flutter hit a significant milestone: a stable 1.0 version. The release took place at the Flutter Live event in London on December 5, 2018. While it can still be considered an early and evolving software project, this article will concentrate on an already established idea and show how to create a fully functional messaging app that uses Flutter 1.2 and Firebase to target both major mobile platforms.

As can be seen from the graph below, Flutter has gained a lot of users in recent months. Its market share increased in 2018, and in terms of search queries it is on track to surpass React Native, hence our choice to produce a fresh tutorial for Flutter.

Essentials

Although efforts have been made to enable readers to follow and complete this project even if it is their first attempt at mobile development, many key mobile development ideas that are not specific to Flutter are discussed and used without comprehensive explanation.

This was done for brevity, as one of the goals is for the reader to finish the project in a single session. Finally, the article assumes that your development environment, including the necessary Android Studio plugins and the Flutter SDK, has already been set up.

Firebase Set Up

The only thing we have to do separately for each platform is to set up Firebase. First, create a new project in the Firebase Dashboard and add Android and iOS apps to the freshly created workspace. The platform produces two settings files you need to download: google-services.json for Android and GoogleService-Info.plist for iOS. Before closing the dashboard, ensure that the Google and Facebook sign-in methods are enabled, as we will use them to identify users. To do this, select the Authentication menu item and then the Sign-In Method tab.

Now, as the remainder of the setup takes place in our codebase, you can close the dashboard. First of all, we need to put the downloaded files into our project: google-services.json goes into the $(FLUTTER_PROJECT_ROOT)/android/app folder, and GoogleService-Info.plist goes into the $(FLUTTER_PROJECT_ROOT)/ios/Runner directory. Next, we need to set up the Firebase libraries that we will use in the project and connect them to the settings files. This is achieved by specifying the Dart packages (libraries) in the pubspec.yaml file of our project. Paste the following snippet into the file’s dependencies section:

flutter_bloc:
shared_preferences:
firebase_auth:
cloud_firestore:
google_sign_in:
flutter_facebook_login:

The first two are not Firebase-related but will be used frequently in the project. Hopefully, the rest are self-explanatory.

Finally, we need to adjust platform-specific project configurations so our authentication flow can complete successfully. On the Android side, we need to add the Google Services Gradle plugin to our project-level Gradle setup. In other words, in the $(FLUTTER_PROJECT_ROOT)/android/build.gradle file, we must add the following item to the dependency list:

classpath 'com.google.gms:google-services:4.2.0' // change 4.2.0 to the latest version

Then we need to apply the plugin by adding this line to the end of the app-level Gradle file:

$(FLUTTER_PROJECT_ROOT)/android/app/build.gradle:
apply plugin: 'com.google.gms.google-services'

The last thing on this platform is to register the parameters of your Facebook application. We need to edit these two files:

$(FLUTTER_PROJECT_ROOT)/android/app/src/main/AndroidManifest.xml and $(FLUTTER_PROJECT_ROOT)/android/app/src/main/res/values/strings.xml:
<!-- AndroidManifest.xml -->
 
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
<!-- … -->
 
    <application>
        <!-- … -->
        <meta-data android:name="com.facebook.sdk.ApplicationId"
   android:value="@string/facebook_app_id"/>
 
        <activity
            android:name="com.facebook.FacebookActivity"
             android:configChanges="keyboard|keyboardHidden|screenLayout|screenSize|orientation"
            android:label="@string/app_name" />
        <activity
            android:name="com.facebook.CustomTabActivity"
            android:exported="true">
                <intent-filter>
                    <action android:name="android.intent.action.VIEW" />
                    <category android:name="android.intent.category.DEFAULT" />
                    <category android:name="android.intent.category.BROWSABLE" />
                    <data android:scheme="@string/fb_login_protocol_scheme" />
                </intent-filter>
        </activity>
 
<!-- … -->
    </application>
</manifest>
 
<!-- strings.xml -->
<resources>
   <string name="app_name">Toptal Chat</string>
   <string name="facebook_app_id">${YOUR_FACEBOOK_APP_ID}</string>
   <string name="fb_login_protocol_scheme">${YOUR_FACEBOOK_URL}</string>
</resources>

Now it is iOS time. Fortunately, in this situation, we only need to alter one file. Add the following values to the $(FLUTTER_PROJECT_ROOT)/ios/Runner/Info.plist file (note that the CFBundleURLTypes item may already be in the list; in that event, add these items to the existing array instead of declaring it again):

<key>CFBundleURLTypes</key>
<array>
  <dict>
     <key>CFBundleURLSchemes</key>
     <array>
        <string>${YOUR_FACEBOOK_URL}</string>
     </array>
  </dict>
  <dict>
     <key>CFBundleTypeRole</key>
     <string>Editor</string>
     <key>CFBundleURLSchemes</key>
     <array>
        <string>${YOUR_REVERSED_GOOGLE_WEB_CLIENT_ID}</string>
     </array>
  </dict>
</array>
<key>FacebookAppID</key>
<string>${YOUR_FACEBOOK_APP_ID}</string>
<key>FacebookDisplayName</key>
<string>${YOUR_FACEBOOK_APP_NAME}</string>
<key>LSApplicationQueriesSchemes</key>
<array>
  <string>fbapi</string>
  <string>fb-messenger-share-api</string>
  <string>fbauth2</string>
  <string>fbshareextension</string>
</array>

A word about the architecture of BLoC

This architecture pattern was described in one of our past articles, showing the use of BLoC to share code between Flutter and AngularDart, so we’re not going to explain it in detail here.

The fundamental concept is that each screen has the following classes:

  1. View: shows the present state and delegates user input as events to the bloc.
  2. State: represents the “live” data the user interacts with through the current view.
  3. Bloc: responds to events and updates the state accordingly, optionally requesting data from one or more local or remote repositories.
  4. Event: the result of a user action that may or may not alter the present state.

It can be thought of as the following visual representation:

We also have a model folder containing data classes and repositories that produce instances of these classes.

UI Development

Unlike native app development on Android and iOS, where the UI is constructed in XML and is totally separated from the business logic codebase, building UI in Flutter is done entirely in Dart. We will use relatively simple compositions of UI elements that depend on the present state, with distinct flags (e.g. isLoading, isEmpty). The Flutter UI is all about widgets, or rather the widget tree. Widgets can be stateful or stateless. For stateful ones, it is important to stress that a build and draw pass is scheduled for the next drawing cycle when setState() is called on a widget that is currently displayed (calling it in the constructor or after the widget has been disposed of results in a runtime error).

For brevity, only one of the UI classes (view) will be shown here:

class LoginScreen extends StatefulWidget {
 LoginScreen({Key key}) : super(key: key);
 
 @override
 State<StatefulWidget> createState() => _LoginState();
}
 
class _LoginState extends State<LoginScreen> {
 final _bloc = LoginBloc();
 
 @override
 Widget build(BuildContext context) {
   return BlocProvider<LoginBloc>(
     bloc: _bloc,
     child: LoginWidget(widget: widget, widgetState: this)
   );
 }
 
 @override
 void dispose() {
   _bloc.dispose();
   super.dispose();
 }
}
 
class LoginWidget extends StatelessWidget {
 const LoginWidget({Key key, @required this.widget, @required this.widgetState}) : super(key: key);
 
 final LoginScreen widget;
 final _LoginState widgetState;
 
 @override
 Widget build(BuildContext context) {
   return Scaffold(
     appBar: AppBar(
       title: Text("Login"),
     ),
     body: BlocBuilder(
         bloc: BlocProvider.of<LoginBloc>(context),
         builder: (context, LoginState state) {
           if (state.loading) {
             return Center(
                 child: CircularProgressIndicator(strokeWidth: 4.0)
             );
           } else {
             return Center(
               child: Column(
                 mainAxisAlignment: MainAxisAlignment.center,
                 crossAxisAlignment: CrossAxisAlignment.center,
                 children: <Widget>[
                   ButtonTheme(
                     minWidth: 256.0,
                     height: 32.0,
                     child: RaisedButton(
                       onPressed: () => BlocProvider.of<LoginBloc>(context).onLoginGoogle(this),
                       child: Text(
                         "Login with Google",
                         style: TextStyle(color: Colors.white),
                       ),
                       color: Colors.redAccent,
                     ),
                   ),
                   ButtonTheme(
                     minWidth: 256.0,
                     height: 32.0,
                     child: RaisedButton(
                       onPressed: () => BlocProvider.of<LoginBloc>(context).onLoginFacebook(this),
                       child: Text(
                         "Login with Facebook",
                         style: TextStyle(color: Colors.white),
                       ),
                       color: Colors.blueAccent,
                     ),
                   ),
                 ],
               ),
             );
           }
         }),
   );
 }
 
 void navigateToMain() {
     NavigationHelper.navigateToMain(widgetState.context);
 }
}

The remaining UI classes follow the same pattern but may have different behavior, and may include an empty-state widget tree in addition to the loading state.

Authentication

As you might have guessed, we will use the google_sign_in and flutter_facebook_login libraries to authenticate the user through their social network profile. First, ensure that these packages are imported into the file that will manage the login logic:

import 'package:flutter_facebook_login/flutter_facebook_login.dart';
import 'package:google_sign_in/google_sign_in.dart';

Now, we will have two separate components that take care of our authentication flow. The first initiates a sign-in request with either Facebook or Google:

void onLoginGoogle(LoginWidget view) async {
    dispatch(LoginEventInProgress());
    final googleSignInRepo = GoogleSignIn(signInOption: SignInOption.standard, scopes: ["profile", "email"]);
    final account = await googleSignInRepo.signIn();
    if (account != null) {
        LoginRepo.getInstance().signInWithGoogle(account);
    } else {
        dispatch(LogoutEvent());
    }
}
 
void onLoginFacebook(LoginWidget view) async {
    dispatch(LoginEventInProgress());
    final facebookSignInRepo = FacebookLogin();
    final signInResult = await facebookSignInRepo.logInWithReadPermissions(["email"]);
    if (signInResult.status == FacebookLoginStatus.loggedIn) {
        LoginRepo.getInstance().signInWithFacebook(signInResult);
    } else if (signInResult.status == FacebookLoginStatus.cancelledByUser) {
        dispatch(LogoutEvent());
    } else {
        dispatch(LoginErrorEvent(signInResult.errorMessage));
    }
}

The second one will be called when either provider returns the profile information. We do this by instructing our login handler to listen to the Firebase auth stream onAuthStateChanged:

void _setupAuthStateListener(LoginWidget view) {
 if (_authStateListener == null) {
   _authStateListener = FirebaseAuth.instance.onAuthStateChanged.listen((user) {
     if (user != null) {
       final loginProvider = user.providerId;
       UserRepo.getInstance().setCurrentUser(User.fromFirebaseUser(user));
       if (loginProvider == "google") {
         // TODO analytics call for google login provider
       } else {
         // TODO analytics call for facebook login provider
       }
       view.navigateToMain();
     } else {
       dispatch(LogoutEvent());
     }
   }, onError: (error) {
     dispatch(LoginErrorEvent(error));
   });
 }
}

Flutter Tutorial: How to create an app for instant messaging

We’re finally getting to the exciting part. As the name indicates, messages should be exchanged as quickly as possible, ideally instantaneously. Fortunately, cloud_firestore lets us communicate with a Firestore instance, and we can use its snapshots() function to open a data stream that provides us with real-time updates.

In my view, with the exception of the startChatroomForUsers method, all the chat repo code is quite simple. That method is responsible for creating a new chat room for two users, unless there is already a room with both users (because we don’t want multiple rooms for the same user pair), in which case it returns the existing chat room.

However, due to Firestore’s design, nested array-contains queries are presently not supported. So we can’t get exactly the right data in a single query; we need to do extra filtering on our side. The workaround consists of finding all the chat rooms for the logged-in user and then looking for the one that also includes the selected user:

Future<SelectedChatroom> startChatroomForUsers(List<User> users) async {
 DocumentReference userRef = _firestore
     .collection(FirestorePaths.USERS_COLLECTION)
     .document(users[1].uid);
 QuerySnapshot queryResults = await _firestore
     .collection(FirestorePaths.CHATROOMS_COLLECTION)
     .where("participants", arrayContains: userRef)
     .getDocuments();
 DocumentReference otherUserRef = _firestore
     .collection(FirestorePaths.USERS_COLLECTION)
     .document(users[0].uid);
 DocumentSnapshot roomSnapshot = queryResults.documents.firstWhere((room) {
   return room.data["participants"].contains(otherUserRef);
 }, orElse: () => null);
 if (roomSnapshot != null) {
   return SelectedChatroom(roomSnapshot.documentID, users[0].displayName);
 } else {
   Map<String, dynamic> chatroomMap = Map<String, dynamic>();
   chatroomMap["messages"] = List<String>(0);
   List<DocumentReference> participants = List<DocumentReference>(2);
   participants[0] = otherUserRef;
   participants[1] = userRef;
   chatroomMap["participants"] = participants;
   DocumentReference reference = await _firestore
       .collection(FirestorePaths.CHATROOMS_COLLECTION)
       .add(chatroomMap);
   DocumentSnapshot chatroomSnapshot = await reference.get();
   return SelectedChatroom(chatroomSnapshot.documentID, users[0].displayName);
 }
}

Owing to similar design limitations, Firestore also does not support array updates (inserting a new element into an existing array field value) together with the special FieldValue.serverTimestamp() value.

That value tells the platform that, at the moment the write takes place, the field containing it should be filled in with the actual server-side timestamp instead of a client-supplied value. Instead, we use DateTime.now() when creating our serialized message object and insert that object into the chat room’s message array.

Future<bool> sendMessageToChatroom(String chatroomId, User user, String message) async {
 try {
   DocumentReference authorRef = _firestore.collection(FirestorePaths.USERS_COLLECTION).document(user.uid);
   DocumentReference chatroomRef = _firestore.collection(FirestorePaths.CHATROOMS_COLLECTION).document(chatroomId);
   Map<String, dynamic> serializedMessage = {
     "author" : authorRef,
     "timestamp" : DateTime.now(),
     "value" : message
   };
   chatroomRef.updateData({
     "messages" : FieldValue.arrayUnion([serializedMessage])
   });
   return true;
 } catch (e) {
   print(e.toString());
   return false;
 }
}

Wrapping Up

Obviously, the Flutter messaging app we have created is more of a proof of concept than an instant messaging application ready for the market. One might consider adding end-to-end encryption or rich content (group chats, media attachments, URL parsing) as ideas for further growth. But first of all, one should implement push notifications, as they are almost a must-have feature for an instant messaging application; for the sake of brevity, we have left them out of the scope of this article. In addition, Firestore still lacks a few features, such as nested array-contains queries, that would make data access easier and more precise.

As stated at the beginning of the article, Flutter has only recently reached a stable 1.0 release and will continue to grow, not only in terms of framework features and capabilities but also in terms of its developer community and third-party libraries and resources. It makes sense to invest your time in getting to know Flutter app development now, as it is clearly here to stay and will speed up your mobile development process.

Obviously, in today’s scenario, the demand for coders is very high, so Codersera has taken the initiative of providing the best coders with extensive coding experience.

Hire Coders Now!