Introduction
I'll assume you already understand the benefits of using Docker in development, so let's skip the usual details. I do want to point out, though, that dockerizing an app with Docker Compose is far more convenient than managing containers by hand, simply because a Compose file lets you bring up the entire app with a single terminal command. You probably know this too, but I mention it in case some of you don't. Okay, now let's get to the point.
Prerequisites
You should be familiar with every technology mentioned in the title, because this article is not going to teach you Docker or PERN stack development; without that background you will get confused at various stages. If you know the stack but haven't yet managed to dockerize a PERN app, this walkthrough is for you (and thank you for giving this article a read).
Before moving on, note that the following is going to be the final project structure:
pern-project/
--- frontend/
------ .dockerignore
------ frontend.dockerfile
------ ...
--- backend/
------ .dockerignore
------ backend.dockerfile
------ ...
--- docker-compose.yml
Code Backend
You could start with the frontend instead, but in my opinion writing the backend code first makes it easier to proceed, so that's where I'll begin.
Let's create a Node.js and Express app first:
mkdir backend && cd backend
npm init -y
This will create a backend folder and package.json file in it. Make sure you have backend and frontend in a single project folder. Now, install all the necessary dependencies from npm:
npm install express dotenv cors
Since we are using TypeScript, install the TypeScript-related dev dependencies as well:
npm install --save-dev typescript ts-node @types/express @types/node @types/cors
Here, the ts-node module is going to help us run the backend. Since TypeScript is installed, run the following command to generate a tsconfig.json file:
npx tsc --init
This will create a tsconfig.json file in your backend folder. Replace everything inside tsconfig.json with the following:
{
  "compilerOptions": {
    "target": "es2016",
    "module": "commonjs",
    "rootDir": "./src",
    "outDir": "./dist",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*.ts", "data-types.d.ts"]
}
Okay, now create a src folder inside the backend and create an index.ts file inside the src folder. If you've done everything as described, the folder structure is going to look like this:
backend/
--- node_modules/
--- src/
------ index.ts
--- package.json
--- tsconfig.json
--- ...
After that, write some simple boilerplate code in the index.ts file for the backend. The code will look like this:
import express, { Request, Response } from "express"
import "dotenv/config"
import cors from "cors"

const app = express()
const PORT = process.env.PORT || 3000

// CORS options: allow the frontend origin and send credentials
const corsOptions = {
  origin: process.env.CLIENT_URL || "http://localhost:5173",
  credentials: true,
}

app.use(express.json())
app.use(express.urlencoded({ extended: true }))
app.use(cors(corsOptions))

app.get("/", (req: Request, res: Response) => {
  res.json({
    message: "Hello, TypeScript with Express! Updated!",
  })
})

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`)
})
So, this is the basic boilerplate for our backend. Now you can test it on your local machine; hopefully you won't run into any issues.
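If you want to give it a quick spin before dockerizing anything, you can start the server directly with ts-node (installed as a dev dependency above) and open http://localhost:3000 in your browser to see the JSON response:
npx ts-node src/index.ts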
Integrate Postgres With Prisma
Since this is a PERN stack application, we need to integrate a PostgreSQL database into the backend app. For this integration we are going to use Prisma ORM to communicate with the database. So, install Prisma and Prisma Client first:
npm i @prisma/client
npm i --save-dev prisma
Once these dependencies are installed, run the following command to generate a prisma folder with a basic schema structure in it:
npx prisma init
After running this command, you'll see a folder called prisma and a .env file created in the root directory of the backend. Now, we need to write a User model in the schema.prisma file inside the prisma folder, because our goal is to save data in the database. The code should look like this:
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id        String   @id @default(uuid())
  name      String
  username  String
  email     String
  password  String
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  @@map("users")
}
DATABASE_URL is an env variable which you need to define in the .env file. That file was created when you ran the npx prisma init command. So, let's go inside the .env file and change the URL to this one:
DATABASE_URL=postgresql://postgres:postgres@db:5432/pern_db?schema=public
As you can see, both the username and password are set to postgres, but the host name is db. I'll set the database service's name to db in the Docker Compose file. If this seems a bit confusing, don't worry; everything about the DATABASE_URL will make sense when you read the Docker Compose section.
Now let's set up a singleton for the Prisma Client, ensuring that there's only one instance of it during development, even when the application reloads. You can safely skip this part, but I prefer to keep it, since it prevents multiple instances of Prisma Client from being created, which could lead to connection issues with the database.
Create a file prismadb.ts in the src folder of your backend folder, which should look like this:
import { PrismaClient } from "@prisma/client"
import "dotenv/config"

// Extend the global object with PrismaClient
declare global {
  var prisma: PrismaClient | undefined
}

// Prevent multiple instances of Prisma Client in development
const prisma = global.prisma || new PrismaClient()
if (process.env.NODE_ENV !== "production") global.prisma = prisma

export default prisma
Now, let's create an endpoint that saves a user to the database. This code will be placed in the index.ts file, and the endpoint will be defined at /register. Here's a reference of where and how it should be placed:
import prisma from "./prismadb"

// ...

app.get("/", (req: Request, res: Response) => {
  res.json({
    message: "Hello, TypeScript with Express! Updated!",
    client_url: "http://client:5173",
  })
})

app.post("/register", async (req: Request, res: Response) => {
  const { name, username, email, password } = req.body

  await prisma.user.create({
    data: {
      name,
      username,
      email,
      password,
    },
  })

  res.json({
    message: "User created successfully",
  })
})

// ...
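Once everything is up and running (database included, which we'll wire up with Docker Compose below), a quick way to verify this endpoint is a plain curl request; the field values here are just placeholders:
curl -X POST http://localhost:3000/register \
  -H "Content-Type: application/json" \
  -d '{"name":"Test User","username":"testuser","email":"test@example.com","password":"secret"}'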
We've covered the major work by now; the backend code is almost done. Now let's add a Dockerfile to the backend.
Backend Dockerfile
Now, create a file named backend.dockerfile in the root of the backend directory and write:
FROM node:20
WORKDIR /app
COPY package*.json .
RUN npm install
COPY prisma ./prisma
RUN npx prisma generate
COPY . .
EXPOSE 3000
RUN npm install -g nodemon ts-node
CMD ["nodemon", "src/index.ts"]
The RUN npx prisma generate command generates the Prisma Client code based on your schema.prisma file, allowing you to interact with the database through Prisma, and RUN npm install -g nodemon ts-node installs nodemon and ts-node globally in the Docker container.
The COPY . . command copies everything from the project folder into the Docker image. Since npm install already installs all the necessary dependencies, we don't need to include the node_modules folder in the image. To exclude it, create a .dockerignore file in the root of the backend folder and add node_modules to it:
node_modules
Code Frontend
Now, it’s time to code the frontend. Simply generate the frontend using Vite with this command:
npm create vite@latest frontend -- --template react-ts
This command creates a React project with TypeScript; here, the client folder is named frontend.
Call the GET endpoint inside a useEffect() hook so we can check that the API request works, then add a form to the component for registering a user. Here is what the code looks like:
// App.tsx
import { FormEvent, useEffect, useState } from "react"
// ...

function App() {
  const [name, setName] = useState("")
  const [username, setUsername] = useState("")
  const [email, setEmail] = useState("")
  const [password, setPassword] = useState("")

  const saveUser = async (e: FormEvent) => {
    e.preventDefault()

    await fetch("http://localhost:3000/register", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        name,
        username,
        email,
        password,
      }),
    })
      .then((res) => res.json())
      .then((data) => console.log(data))
  }

  useEffect(() => {
    fetch("http://localhost:3000")
      .then((res) => res.json())
      .then((data) => console.log(data))
  }, [])

  return (
    <>
      <form onSubmit={saveUser}>
        <input
          type="text"
          onChange={(e) => setName(e.target.value)}
          placeholder="Enter your name"
        />
        <input
          type="text"
          onChange={(e) => setUsername(e.target.value)}
          placeholder="Enter your username"
        />
        <input
          type="email"
          onChange={(e) => setEmail(e.target.value)}
          placeholder="Enter your email"
        />
        <input
          type="password"
          onChange={(e) => setPassword(e.target.value)}
          placeholder="Enter your password"
        />
        <button type="submit">Submit</button>
      </form>
    </>
  )
}

export default App
That’s all you need in the frontend.
Frontend Dockerfile
Now, create a file named frontend.dockerfile in the root of the frontend directory and write:
FROM node:20
WORKDIR /app
COPY package*.json .
RUN npm install
EXPOSE 5173
COPY . .
CMD [ "npm", "run", "dev" ]
This is similar to the backend Dockerfile. Here, we are exposing port 5173 because Vite's default port is 5173.
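One caveat worth flagging: by default the Vite dev server only listens on localhost inside the container, so the published port won't be reachable from your host machine. A minimal way to handle this, assuming the default react-ts template setup, is to enable server.host in vite.config.ts (passing --host to the dev script works too):
import { defineConfig } from "vite"
import react from "@vitejs/plugin-react"

// host: true makes Vite listen on 0.0.0.0, so the mapped port 5173 is reachable from outside the container
export default defineConfig({
  plugins: [react()],
  server: {
    host: true,
    port: 5173,
  },
})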
Now, create a .dockerignore file in the root of the frontend directory and include node_modules, for the same reasons as the backend's .dockerignore.
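For reference, that file only needs a single line:
node_modules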
At this point, both the backend and frontend are set up properly. Now, it's time to write the docker-compose file.
Docker-Compose file
Remember the project structure I mentioned at the beginning of this article. With that in mind, create a docker-compose.yml file in the root of the pern-project folder.
We are going to run three containers on a single network: frontend, backend, and PostgreSQL. For the frontend and backend, we'll build images using their respective Dockerfiles. As for PostgreSQL, when we define the PostgreSQL service in the docker-compose file, Compose will automatically pull the PostgreSQL image from Docker Hub and run a container from it.
So, let’s begin with the PostgreSQL Database service.
Postgres Database Service
Open the docker-compose.yml file and write the db service:
services:
  db:
    container_name: db
    image: postgres:14
    restart: always
    ports:
      - "5432:5432"
    env_file:
      - ./backend/.env
    volumes:
      # Mount the named volume over the Postgres image's default data directory so data persists
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata: {}
I have set the image to postgres:14 and defined a volume for the database, so all database data will be stored in this volume. To run the container, we need to set environment variables like POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB. I have stored these variables inside the backend's .env file and referenced this file in the env_file field of the service. This allows the db service to access the necessary environment variables from the local .env file.
# backend/.env
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=pern_db
DATABASE_URL=postgresql://postgres:postgres@db:5432/pern_db?schema=public
Now, you can relate the DATABASE_URL to the Postgres variables.
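To spell it out, the pieces of the connection string map onto those variables like this (the db host and 5432 port come from the Compose service we just defined):
# postgresql://<POSTGRES_USER>:<POSTGRES_PASSWORD>@<db service name>:<port>/<POSTGRES_DB>?schema=public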
Write & Run All Services
Now that you understand how the db service is defined in the docker-compose file, it's time to add the frontend and backend services. To run all the containers on a single network, we need to define a network in the compose file and assign this network to each service.
Here is the final look of the docker-compose.yml file:
services:
  frontend:
    container_name: frontend
    build:
      context: ./frontend
      dockerfile: frontend.dockerfile
    ports:
      - "5173:5173"
    networks:
      - pern_net
    volumes:
      - ./frontend:/app
      - /app/node_modules
    depends_on:
      - backend

  backend:
    container_name: backend
    build:
      context: ./backend
      dockerfile: backend.dockerfile
    env_file:
      - ./backend/.env
    ports:
      - "3000:3000"
    networks:
      - pern_net
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      - db

  db:
    container_name: db
    image: postgres:14
    restart: always
    ports:
      - "5432:5432"
    networks:
      - pern_net
    env_file:
      - ./backend/.env
    volumes:
      - pgdata:/var/lib/postgresql/data

networks:
  pern_net:
    driver: bridge

volumes:
  pgdata: {}
I don't think there's much else to clarify here. Just keep in mind that defining the volumes this way for the frontend and backend services lets you edit the code and see it reload while developing. If you don't need live reloading, you can skip the volumes field in both services.
If you have come this far, you are ready to run Docker Compose. So, open a terminal in the pern-project folder and run this command:
docker compose up -d
After a short wait, you will see all the containers running properly. The -d flag starts the containers in detached mode, so your terminal stays free for other commands.
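A few standard Compose commands come in handy at this point: checking that the three containers are up, following the backend logs, and shutting everything down when you're finished.
docker compose ps
docker compose logs -f backend
docker compose down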
Lastly, there is one more task left: applying the Prisma schema to the database by running a migration inside the backend container:
docker exec -it backend npx prisma migrate dev --name init
Now you have everything set up, and you are ready to test it out.
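If you want to confirm that the migration ran and that the /register endpoint actually writes rows, you can query the database from inside the db container with psql (the users table name comes from the @@map in the Prisma schema):
docker exec -it db psql -U postgres -d pern_db -c "SELECT id, username, email FROM users;"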
Conclusion
In conclusion, Docker Compose simplifies the development process by letting you run your entire application with a single command, which keeps things hassle-free. The PERN stack is gaining popularity these days, especially as many employers and founders show more interest in PostgreSQL as the database alongside Node.js. Dockerizing a PERN stack application makes it much easier to develop, test, and run the app across different machines, ensuring a consistent environment everywhere.