Docker has a reputation for having a steep learning curve, and some of that is earned. The documentation is thorough but dense, and it's easy to end up with a working setup you don't really understand. This guide takes the opposite approach: explain the concepts first, then the commands, then the practical patterns you'll actually use.
The Core Idea (Put Simply)
A Docker container is a process running in an isolated environment. It has its own filesystem, its own network interface, and its own process space. From inside the container, it looks like a dedicated machine. From outside, it's just a process managed by Docker.
The reason this is useful: the container brings its own dependencies. If your application needs Node.js 20, Python 3.11, and a specific version of a system library, all of that is packaged with the container. You don't install dependencies on the host machine; they live inside the container's filesystem. This is why "it works on my machine" largely goes away—if the container works, it works everywhere Docker runs.
An image is the blueprint. A container is a running instance of an image. The same image can run as hundreds of identical containers simultaneously. Images are stored in registries—Docker Hub is the public one, but you can run private registries too.
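To make the image/container distinction concrete, here is a short session (assuming Docker is installed; nginx:alpine is just a small, convenient example image):

```shell
# Pull an image once...
docker pull nginx:alpine

# ...then run it as two independent containers from the same blueprint.
docker run -d --name web1 -p 8081:80 nginx:alpine
docker run -d --name web2 -p 8082:80 nginx:alpine

# One image, two running containers:
docker images nginx
docker ps --filter name=web
```

Each container gets its own filesystem, network interface, and process space, even though both started from the identical image.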
Your First Container
The canonical first command:
docker run hello-world
Docker pulls the hello-world image from Docker Hub, creates a container from it, runs it, and prints a message. Not exciting, but it confirms Docker is working.
Something more useful—run a PostgreSQL database without installing PostgreSQL:
docker run -d --name mydb -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:16
Breaking this down: -d runs it in the background (detached). --name gives it a memorable name. -e sets an environment variable (the password). -p 5432:5432 maps port 5432 on your machine to port 5432 inside the container. postgres:16 is the image name and tag.
Now you have a running PostgreSQL instance. Connect to it with any PostgreSQL client at localhost:5432. When you're done, docker stop mydb stops it, and docker rm mydb removes it. Once the container is removed, its data goes with it—unless you've set up a volume, which is the next thing to learn.
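One way to check the database is actually up, without installing anything on the host: use the psql client that ships inside the postgres image itself.

```shell
# Run psql inside the mydb container; -U postgres matches the image's default superuser.
docker exec -it mydb psql -U postgres -c "SELECT version();"

# Or follow the server's startup logs:
docker logs mydb
```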
Volumes: Making Data Persist
Containers are ephemeral by design. When a container stops and is removed, anything written to its filesystem is gone. For a database, that's a problem. Volumes solve it:
docker run -d --name mydb -e POSTGRES_PASSWORD=secret -p 5432:5432 -v pgdata:/var/lib/postgresql/data postgres:16
The -v pgdata:/var/lib/postgresql/data flag creates a named volume called pgdata and mounts it at the path where PostgreSQL stores its data. The volume lives on your host machine and persists across container restarts and removals. docker volume ls shows your volumes.
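A quick way to convince yourself the data really survives: write something, destroy the container entirely, and start a fresh one on the same volume (the table name demo below is arbitrary):

```shell
# Write something into the database.
docker exec mydb psql -U postgres -c "CREATE TABLE demo (id int);"

# Destroy the container entirely...
docker stop mydb && docker rm mydb

# ...then start a brand-new container mounted on the same pgdata volume.
docker run -d --name mydb -e POSTGRES_PASSWORD=secret -p 5432:5432 \
  -v pgdata:/var/lib/postgresql/data postgres:16

# (Give PostgreSQL a moment to finish starting.) The table is still there:
docker exec mydb psql -U postgres -c "\dt"
```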
Writing a Dockerfile
To containerize your own application, you write a Dockerfile—a script that describes how to build the image:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Each line is an instruction. FROM sets the base image. WORKDIR sets the working directory inside the container. COPY copies files from your host into the image. RUN executes a command during the build. EXPOSE documents which port the app uses. CMD is what runs when the container starts.
Build it: docker build -t myapp . (the trailing dot is the build context, the directory whose files are sent to the builder). Run it: docker run -p 3000:3000 myapp.
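One companion file worth adding next to the Dockerfile is a .dockerignore, so COPY . . doesn't drag node_modules or local clutter into the image. The entries below are typical for a Node project, not required by Docker itself:

```
node_modules
npm-debug.log
.git
.env
```

This keeps the build context small (faster builds) and keeps secrets like .env out of the image.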
Docker Compose: Running Multiple Containers Together
Real applications have multiple services—a web server, a database, maybe a cache. Docker Compose lets you define and run them together in a docker-compose.yml file:
services:
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
docker compose up starts everything. docker compose down stops and removes the containers. The services can communicate with each other using their service names as hostnames—your web service connects to the database at db:5432, not localhost:5432.
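A typical Compose workflow, assuming the docker-compose.yml above sits in the current directory:

```shell
docker compose up -d        # build (if needed) and start all services in the background
docker compose ps           # see what's running
docker compose logs -f web  # follow the web service's output
docker compose down         # stop and remove the containers; named volumes survive
docker compose down -v      # ...or remove the named volumes too, wiping the database
```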
Commands You'll Actually Use
docker ps — list running containers
docker ps -a — list all containers, including stopped ones
docker logs mycontainer — view container output
docker exec -it mycontainer sh — get a shell inside a running container
docker images — list local images
docker rm mycontainer — remove a container
docker rmi myimage — remove an image
docker system prune — clean up stopped containers, unused images, and networks
The docker exec -it command is particularly useful for debugging—it lets you get inside a running container and poke around as if it were a real machine.
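A typical debugging session might look like this (the container name is illustrative, and the commands shown inside are ordinary shell commands, nothing Docker-specific):

```shell
# Open an interactive shell in the running container.
docker exec -it mycontainer sh

# Inside the container you can inspect things as if on a real machine:
#   ps aux                same-named processes, but only this container's
#   env                   the environment variables the app actually sees
#   cat /etc/os-release   which base distro the image uses
#   exit                  leave; the container keeps running
```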
Docker's learning curve flattens quickly once you've worked through these fundamentals. The core mental model—images as blueprints, containers as running instances, volumes for persistence, Compose for multi-service setups—covers the majority of practical use cases. The rest is details.
