How to set up a powerful API with GraphQL, Koa and MongoDB -- deploying to production

Our GraphQL API runs smoothly locally, but what if we want to share it with the world?

In order to make our GraphQL API available to the public, we'll need to deploy it to a production server. I chose Heroku for its simplicity.

Head over to Heroku, create an account if you haven't already, and create a new project. We don't have to pay anything for our demo.

Head over to the deploy tab and sync Heroku with GitHub. The easiest way to deploy is via our GitHub repository.

And finally, add the mLab Heroku add-on.

Heroku Dashboard

pm2 setup

Next, let’s get our pm2 ready for production. Since Heroku runs our scripts , we’ll need to add pm2 to our package.json.

How to setup a powerful API with GraphQL, Koa and MongoDB -- scalability and testing

Welcome to part III of our series where we set up a powerful API. So far, we have achieved basic CRUD functionality.

As our app grows, so does our mutation count. To keep our codebase as clean as possible, we should extract the mutations into dedicated files. This way we can ensure our code stays modular and separated into maintainable chunks.

Let’s create a folder graphql/mutations and inside the folder create addGadget.js, updateGadget, and removeGadget files..

We simply place the mutation objects into the files and export them.

Moving addGadget mutation to separate file

graphql/mutations/addGadget.js

const { GraphQLString } = require('graphql');
const gadgetGraphQLType = require('./../gadgetType');
const Gadget = require('./../../models/gadget');

module.exports = {
  // The GraphQL type this mutation returns
  type: gadgetGraphQLType,
  // The arguments a client supplies when calling addGadget
  args: {
    name: { type: GraphQLString },
    release_date: { type: GraphQLString },
    by_company: { type: GraphQLString },
    price: { type: GraphQLString }
  },
  resolve(parent, args) {
    // Build a new Gadget document from the supplied arguments
    const newGadget = new Gadget({
      name: args.name,
      release_date: args.release_date,
      by_company: args.by_company,
      price: args.price,
    });

    // save() returns a promise, which GraphQL resolves for us
    return newGadget.save();
  }
};

How to set up a powerful API with GraphQL, Koa and MongoDB -- CRUD

This is a series where we learn how to set up a powerful API with GraphQL, Koa and Mongo. The primary focus will be on GraphQL. Check out part I if you haven't yet.

Mutations

So far we can read our data, but odds are we'll need to edit our records/documents at some point. Any complete data platform needs a way to modify server-side data as well.

Okay, imagine this: a company launched a new gadget. How would we go about adding the record to our database with GraphQL?

Mutations to the rescue!

Think of mutations as the GraphQL counterpart to REST's POST or PUT actions. Setting up a mutation is quite straightforward.
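For a taste of what's coming, calling a mutation from GraphiQL reads much like a query; with the addGadget mutation we'll build (the field values here are made up), it could look like this:

mutation {
  addGadget(name: "Gadget X", release_date: "2019", by_company: "Acme", price: "99") {
    name
  }
}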

Let’s jump in!

How to set up a powerful API with GraphQL, Koa and MongoDB

Building API’s is super fun! Especially when you can leverage modern technologies such as Koa, GraphQL and MongoDB.

Koa is a Node framework, just like Express. We'll replace Express with Koa since Koa uses async/await syntax instead of callbacks.
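To see what that looks like, here's a minimal Koa server (the port is an arbitrary choice):

const Koa = require('koa');
const app = new Koa();

// Middleware is just an async function; ctx bundles the request and response
app.use(async ctx => {
  ctx.body = 'Hello, Koa!';
});

app.listen(4000);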

Koa Github repository


Express Github repository


Getting started

The prerequisites for building our API are the following:

  • Node installed
  • Text Editor; I pick Visual Studio Code
  • Terminal
  • Browser

How to finish setting up your powerful API with Nodejs, GraphQL, MongoDB, Hapi, and Swagger (Part II)

I know, I left you hanging just at the exciting parts — implementing GraphQL!

What is GraphQL anyway, and why is it so popular right now?

“GraphQL’s power comes from a simple idea — instead of defining the structure of responses on the server, the flexibility is given to the client. Each request specifies what fields and relationships it wants to get back, and GraphQL will construct a response tailored for this particular request. The benefit: only one round-trip is needed to fetch all the complex data that might otherwise span multiple REST endpoints, and at the same time only return the data that are actually needed and nothing more.” Source

GraphQL solves many pain points traditional REST APIs might face. Some of them are:

  • Over-fetching — there is data in the response you don’t use.
  • Under-fetching — you don’t have enough data with a call to an endpoint, leading you to call a second endpoint.

Check out this StackOverflow post explaining the two scenarios.

GraphQL has gotten so popular in part because people have good reason to believe it will replace REST entirely — just like REST replaced SOAP.

All the REST folks be like (just a joke, I love both REST and GraphQL ❤)

Getting started with GraphQL

First we need to install the appropriate dependencies.
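In the project root:

npm install graphql apollo-server-hapi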

graphql is the main package for GraphQL, and apollo-server-hapi is the glue between our Hapi server and GraphQL.

Let’s create a new folder called graphql and inside a file called PaintingType.js

Let’s go through from top to bottom:

We require the GraphQL library.

At line 3 we’re deconstructing objects from GraphQL.

const { GraphQLObjectType, GraphQLString } = graphql

Is the same as:

const GraphQLObjectType = graphql.GraphQLObjectType
const GraphQLString = graphql.GraphQLString

Check out this article about destructuring.

Next up, we create a new GraphQLObjectType

Almost all of the GraphQL types you define will be object types. Object types have a name, but most importantly they describe their fields.

Now as you’ve likely noticed, GraphQL is a statically typed language — which means we have to declare all types for our fields. For now our field types are all the type GraphQLString

That's our painting type done. Now we need to hook it up to a root query, which the server will serve and from which it will fetch all data.

https://github.com/apollographql/graphql-guide/blob/master/source/schemas.md

Let’s create a file called schema.js inside our GraphQL folder.

This is our root query which we will serve to the server.

Notice our fields section is more involved now: we are passing the name of the field with the type PaintingType and an args field. Let me ask you this: how would we find a specific painting? We need some kind of argument to look it up by; in this case, the id.

Next we have the resolve function, which takes two parameters: parent and args.

Just to illustrate, GraphQL queries look like the following

graphql query visualization
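Written out, a query for a single painting looks like this (the id value is just an example):

{
  painting(id: "2") {
    name
    url
  }
}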

The painting query returns our PaintingType. Notice how we pass an argument; that's the args parameter in resolve(). The parent parameter would be used in more complex queries where you have more nesting going on.

Let’s export our root query and pass it to the Hapi server. Notice the type GraphQLSchema — this is the root query/schema definition we pass to the server.

exporting the graphql schema
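In code, the export is a couple of lines at the bottom of schema.js:

module.exports = new GraphQLSchema({
  query: RootQuery
});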

Going back to our index.js, we require the GraphQL packages and schema.js.

Next up we need to register the graphiqlHapi plugin.

Inside the server.register({}) we pass our GraphQL configuration.

registering our graphiql plugin

Fairly simple, eh? We registered the graphiql plugin. Notice it's graphiql, not graphql: GraphiQL is the in-browser IDE for exploring GraphQL.

Next let’s register a new plugin: the graphqlHapi which includes the schema we made earlier.

Now, if we head over to http://localhost:4000/graphiql

graphiql interface

Woohoo, it’s works!

Writing our first query
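Running our painting query in GraphiQL at this point gives us a null response:

{
  "data": {
    "painting": null
  }
}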

But why do we get a null response? Well, two reasons.

  • We probably don’t have a painting with an id of 2.
  • Secondly, even if we did have a painting with an id of 2, we're still not fetching it from our MongoDB. Remember the resolve function we left empty? Yup, that's where we will implement data fetching from the database.

Let’s implement it!

Quick change to our model: rename techniques to technique (singular) and make it type String instead of an array of strings. Sorry!

Make a new request with Postman with the technique field changed. Check the first article if you forgot :-)

Now we go back to GraphiQL. (By the way, check your mLab document for the appropriate id. The id has to match in order to get a 200 response.)

Painting data being returned from mongoDB

Works like a charm! An excellent feature of GraphiQL is that it provides API documentation out of the box.

graphiql api documentation


Finishing touches with swagger

According to Swagger’s site,

Swagger offers the most powerful and easiest to use tools to take full advantage of the OpenAPI Specification.

Let’s install the dependencies.

Now we register the plugin.
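A sketch of the registration, with inert and vision registered alongside it (the title is an arbitrary choice):

const Inert = require('inert');
const Vision = require('vision');
const HapiSwagger = require('hapi-swagger');

await server.register([
  Inert,
  Vision,
  {
    plugin: HapiSwagger,
    options: {
      info: {
        title: 'Paintings API Documentation',
        version: '1.0.0'
      }
    }
  }
]);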

And the final thing we need to do is add descriptions and tags to our routes.
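For example, the GET route gains an options block (the description text is up to you):

{
  method: 'GET',
  path: '/api/v1/paintings',
  options: {
    description: 'Get all the paintings',
    tags: ['api', 'v1', 'painting']
  },
  handler: (request, h) => {
    return Painting.find();
  }
}

The 'api' tag is what tells hapi-swagger to include the route in the documentation.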

All finished! Head over to http://localhost:4000/documentation

Swagger

Awesome, now we have a self-documenting API that we can just hand to our team.

There is so much more we can do. GraphQL mutations, a frontend to consume our API, refactoring our server side code, and so on. Let me know in the comments if it’s worth the time to write part III :)

Thanks for making it this far, hope you learned a lot and have fun with your new API! ❤

Note: I love to chat on Twitter!

How to set up a powerful API with Nodejs, GraphQL, MongoDB, Hapi, and Swagger

Separating your frontend and backend has many advantages:

  • The biggest reason why reusable APIs are popular — APIs allow you to consume data from a web client, mobile app, desktop app — any client really.
  • Separation of concerns. Long gone are the days where you have one monolithic-like app where everything is bundled together. Imagine you have an extremely convoluted application. Your only option is to hire extremely experienced/senior developers due to the natural complexity.

I’m all for hiring juniors and training your staff, and that’s exactly why you should separate concerns. With separation of concerns, you can reduce the complexity of your application by splitting responsibilities into “micro-services” where each team is specialized in their micro-service.

As mentioned above, the on-boarding/ramp-up process is much quicker thanks to splitting up responsibilities (backend team, frontend team, DevOps team, and so on).

Maurice Moss from “IT Crowd”

Forward thinking and getting started

We will be building a very powerful, yet flexible, GraphQL API based on Node.js and MongoDB, with Swagger documentation.

The main backbone of our API will be Hapi.js. We will go over all the technology in substantial detail.

At the very end, we will have a very powerful GraphQL API with great documentation.

The cherry on top will be our integration with the client (React, Vue, Angular).

Prerequisites

  • NodeJS installed
  • Basic JavaScript — if you feel wary, check out this article for the best courses to brush up on your JavaScript game
  • Terminal (any will do, preferably bash-based)
  • Text editor (any will do)
  • MongoDB (install instructions here) — Mac: brew install mongodb

Let’s goo!

Open the terminal and create the project. Inside the project directory we initialize a Node project.

Creating our project
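Something like the following, with powerful-api as a stand-in project name:

mkdir powerful-api
cd powerful-api
npm init -y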

Next, we want to set up our Hapi server, so let's install the dependencies. You can use either Yarn or NPM.
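With npm:

npm install hapi nodemon --save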

Before we go on, let’s talk about what hapi.js is and what it can do for us.

hapi enables developers to focus on writing reusable application logic instead of spending time building infrastructure.

Instead of going with Express, we are going with Hapi. In a nutshell, Hapi is a Node framework. The reason I chose Hapi is rather simple: simplicity and flexibility over boilerplate code.

Hapi enables us to build our API in a very rapid manner.

Optional: check out this quick crash course on hapi.js:

The second dependency we installed was the good-ole nodemon. Nodemon restarts our server automatically whenever we make changes. It speeds up our development by a big factor.

Let’s open our project with a text editor. I chose Visual Studio Code.

Setting up a Hapi server is very straightforward. Create an index.js file at the root directory with the following contents:
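A sketch along these lines (port 4000, matching the URLs used below):

const Hapi = require('hapi');

// Create a new instance of the Hapi server with port and host options
const server = new Hapi.Server({
  port: 4000,
  host: 'localhost'
});

// Start the server inside an async function so we can await it
const init = async () => {
  await server.start();
  console.log(`Server running at: ${server.info.uri}`);
};

init();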

  • We require the hapi dependency
  • Secondly, we make a constant called server which creates a new instance of our Hapi server — as the arguments, we pass an object with the port and host options.
  • Third and finally, we create an asynchronous function called init. Inside init we await another asynchronous method, server.start(), and at the bottom we call the init() function.

If you’re unsure about async await — watch this:

funfunfunction async await explanation

Now, if we head over to http://localhost:4000 we should see the following:
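Hapi replies with its default 404 payload, something like:

{
  "statusCode": 404,
  "error": "Not Found",
  "message": "Not Found"
}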

Which is perfectly fine, since the Hapi server expects a route and a handler. More on that in a second.

Let’s quickly add the script to run our server with nodemon. Open package.json and edit the scripts section.

Now we can simply run npm start 😎

Routing

Routing is very intuitive with Hapi. Let’s say you hit / — what would you expect to happen? There are three main components in play here.

  • What’s the path? — path
  • What’s the HTTP method? Is it a GET — POST or something else? — method
  • What will happen if that route is reached? — handler

Inside the init method we attach a new method to our server called route, with our options passed as the argument.
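A sketch of such a root route (the greeting string is arbitrary):

server.route({
  method: 'GET',
  path: '/',
  handler: (request, h) => {
    return 'hello world';
  }
});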

If we refresh our page, we should see the return value of our root handler.

Well done, but there is so much more we can do!

Setting up our database

Right, next up we are going to set up our database. We're going to use MongoDB with Mongoose.

Let’s face it, writing MongoDB validation, casting and business logic boilerplate is a drag. That’s why we wrote Mongoose.

The final ingredient related to our database is mLab. Instead of running Mongo on our local computer, we are going to use a cloud provider like mLab.

The reason why I chose mlab is because of the free plan (useful for prototyping) and how simple it is to use. There are more alternatives out there, and I encourage you to explore all of them ❤

mlab (free)

Head over to https://mlab.com/ and sign up.

Let’s create our database.

And finally, create a user for the database. That's all we will be editing on mLab.

Connecting mongoose with mlab

Open index.js and add the following lines. We are basically just telling Mongoose which database we want to connect to. Make sure to use your own credentials.
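A sketch with placeholder credentials (yours come from the mLab dashboard):

const mongoose = require('mongoose');

// Replace the user, password, host and port with your mLab connection details
mongoose.connect('mongodb://<dbuser>:<dbpassword>@ds012345.mlab.com:12345/powerful-api');

mongoose.connection.once('open', () => {
  console.log('connected to database');
});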

If you want to brush up your MongoDB skills, here’s a solid series.

If everything went according to plan, we should see 'connected to database' in the console.

Woohoo!

Good job! Take a quick break and grab some coffee, we are almost ready to dive into the “cool parts”.

Creating Models

With MongoDB, we follow the convention of models. In other words: data modeling.

https://docs.mongodb.com/manual/core/data-modeling-introduction/

It’s a relatively simple concept which you will be able to grasp. Basically we just declare our schema for collections. Think of collections as tables in an SQL database.

Let’s create a directory called models. Inside we will create a file Painting.js

Painting.js is our painting model. It will hold all data related to paintings. Here’s how it will look:
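A sketch of the model, using the fields described below:

const mongoose = require('mongoose');

// Schema for the paintings collection: strongly typed fields
const PaintingSchema = new mongoose.Schema({
  name: String,
  url: String,
  techniques: [String]
});

// Export the model under the name Painting
module.exports = mongoose.model('Painting', PaintingSchema);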

  • We require the mongoose dependency.
  • We declare our PaintingSchema by calling the mongoose schema constructor and passing in the options. Notice how it’s strongly typed: for example the name field can consist of a string, and techniques consists of an array of strings.
  • We export the model and name it Painting

Let’s fetch all of our paintings from the database

First we need to import the Painting model into index.js:
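Assuming the models directory sits next to index.js:

const Painting = require('./models/Painting');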

Adding new routes

Ideally, we want URL endpoints that reflect our actions, such as /api/v1/paintings, /api/v1/paintings/{id}, and so on.

Let’s start off with a GET and POST route. GET fetches all the paintings and POST adds a new painting.

Notice we modified the route to be an array of objects instead of a single object (see the sketch after this list). Also, arrow functions 😊

  • We created a GET route for the /api/v1/paintings path. Inside the handler we call the Mongoose model. Mongoose has built-in methods; the handy one we are using is find(), which returns all records since we're not passing in any conditions to find by.
  • We also created a POST for the same path. The reason for that is we’re following REST conventions. Let’s deconstruct (pun intended) the route handler — remember in our Painting schema we declared three fields: name — url — techniques
    Here we are just accepting those arguments from the request (we will be doing that with Postman in a sec) and passing them to our Mongoose model. After we're done passing arguments, we call the save() method on our new record, which saves it to the mLab database.
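A sketch of both routes, assuming the Painting model imported above:

server.route([
  {
    method: 'GET',
    path: '/api/v1/paintings',
    handler: (request, h) => {
      // No conditions passed to find(), so all paintings are returned
      return Painting.find();
    }
  },
  {
    method: 'POST',
    path: '/api/v1/paintings',
    handler: (request, h) => {
      // Destructure the fields from the request payload
      const { name, url, techniques } = request.payload;
      const painting = new Painting({ name, url, techniques });

      // save() persists the new record to our mLab database
      return painting.save();
    }
  }
]);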

If we head over to http://localhost:4000/api/v1/paintings we should see an empty array.

Why empty? Well we haven’t added any paintings just yet. Let’s do that now!

https://www.getpostman.com/ (free)

Install postman, it’s available for all platforms.

After installation, open postman.

  • On the left you can see the method options. Change that to POST.
  • Next to the POST method we have the URL. That's the URL we want to send our method to.
  • On the right you can see a blue button which sends the request.
  • Below the URL bar we have the options. Click on Body and fill in the fields as in the example:
{
  "name": "Mona Lisa",
  "url": "https://en.wikipedia.org/wiki/Mona_Lisa#/media/File:Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg",
  "techniques": ["Portrait"]
}

Sample data

POST paintings

Alright. Good to go! Let’s open http://localhost:4000/api/v1/paintings

GET paintings

Excellent! We still have some way to go! Next up — GraphQL!

Here’s the source code just in case anyone needs it :-)


If you found this meaningful, check out my Twitter — I share useful content there.

Chapter II here
