How To Install and Use Docker on Rocky Linux

Updated on April 9, 2026


Introduction

Docker is an application that makes it simple and easy to run application processes in containers. Containers are lightweight, isolated environments that package an application together with its dependencies. Unlike virtual machines, containers share the host operating system’s kernel, making them more portable and resource-efficient.

For a detailed introduction to the different components of Docker, check out our article on The Docker Ecosystem.

In this tutorial, you’ll install and use Docker Engine on Rocky Linux. You’ll explore why Docker remains a popular choice despite Rocky Linux’s default Podman runtime, configure Docker permissions, and work with containers and images. You’ll also use Docker Compose to manage multi-container applications, create custom images by committing container changes, and push images to Docker Hub for distribution.

Key Takeaways:

  • Rocky Linux includes Podman by default, but Docker remains popular due to its mature ecosystem, Docker Compose integration, and widespread industry adoption with better third-party tool support.
  • Install Docker from the official repository, not Rocky Linux’s default repos, to get the latest version. Rocky Linux uses RHEL-compatible Docker repositories since it’s binary-compatible with Red Hat Enterprise Linux.
  • Docker group membership grants root-equivalent privileges on the host system. Members can mount sensitive directories, create privileged containers, and access protected files. Only add trusted users to this group.
  • Docker uses a daemon-based architecture where dockerd runs as root and manages all containers, while Podman is daemonless with each container running as a direct child process.
  • Docker Compose is included as a plugin with modern Docker installations. Use docker compose (with a space) as a subcommand, not the older docker-compose (with a hyphen) binary.
  • Multi-container applications can be defined in a single YAML file using Docker Compose, which automatically creates networks allowing services to communicate using service names as hostnames.
  • Container changes are ephemeral. Modifications made inside a running container are lost when the container is removed unless you commit them to a new image using docker commit.
  • Dockerfiles are preferred over docker commit for production workflows because they provide repeatable, version-controlled, and documented image builds.
  • Docker Hub serves as the default registry for pulling and pushing images. You need an account to push images, and proper tagging with username/image-name format is required.
  • Security best practices vary by environment. Add users to the docker group freely on personal development machines, be selective on shared systems, and avoid it entirely in production by using orchestration tools with service accounts instead.

Prerequisites

  • A Rocky Linux server with a non-root user with sudo privileges set up using the Initial Setup Guide for Rocky Linux. This guide explains how to set up users and grant them sudo access.

All the commands in this tutorial should be run as a non-root user. If root access is required for a command, it will be preceded by sudo.

Docker vs Podman: Why Install Docker?

Rocky Linux includes Podman as the default container runtime, marking a significant shift in Red Hat’s container strategy. Podman is a daemonless container engine designed to run OCI (Open Container Initiative) compliant containers without requiring a central background service. This design choice reflects a focus on security and simplicity, eliminating the need for a privileged daemon process that manages all containers on the system.

Docker, on the other hand, uses a traditional client-server architecture where the Docker daemon (dockerd) runs as a background service with root privileges. The Docker CLI client communicates with this daemon through a REST API to manage containers, images, networks, and volumes. Despite these fundamental architectural differences, both tools are fully capable of running OCI-compatible containers, meaning containers built for one platform will generally run on the other.

Key Differences

Understanding the technical and practical differences between Docker and Podman will help you choose the right tool for your needs:

  • Architecture: Docker uses a daemon-based architecture where a single dockerd process manages all containers on the system. This daemon runs with root privileges and handles container lifecycle management, image pulls, networking, and storage. Podman, by contrast, is daemonless. Each container runs as a direct child process of the command that started it, requiring no background service.

  • Security: Podman supports rootless containers by default, allowing non-privileged users to run containers without requiring root access or sudo. This significantly reduces the attack surface by limiting what a compromised container can access. Docker can also run rootless containers, but this requires additional configuration and is not the default mode of operation.

  • Ecosystem: Docker has broader ecosystem support, including the widely-used Docker Hub registry with millions of pre-built images, Docker Compose for multi-container orchestration, and Docker Desktop for local development on Windows and macOS. Many third-party tools, monitoring solutions, and deployment platforms are built specifically around Docker’s APIs and tooling.

  • Tooling: Numerous CI/CD systems, cloud platforms, and development tools are designed with Docker-first integration. While Podman offers a Docker-compatible CLI and socket emulation, some tools may require additional configuration or may not fully support Podman yet.

  • Compatibility: Docker Compose, while having a Podman equivalent (podman-compose), works seamlessly with Docker out of the box. The Docker API is well-established and widely documented, making troubleshooting and finding solutions easier.
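The architectural difference above is easy to observe from a shell. The following guarded sketch degrades gracefully on machines without Docker; it shows that every Docker command ultimately depends on one long-running dockerd process, which Podman has no equivalent of:

```shell
# With Docker, a single root-owned dockerd process manages every container.
# pgrep finds it if it is running; otherwise print a note instead of failing.
pgrep -a dockerd || echo "no dockerd process found (daemon stopped or Docker not installed)"

# Podman has no counterpart to this: each "podman run" spawns the container
# as a direct child of your shell, so there is no central daemon to look for.
```

Either way the command prints one line, which is the point of the check: a Docker host always has a dockerd process to find, while a Podman host does not.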

Why Use Docker Instead of Podman?

You may choose Docker over Podman if you:

  • Need Docker Compose for defining and running multi-container applications. While podman-compose exists, Docker Compose remains the industry standard with better documentation, wider adoption, and more reliable third-party integration.
  • Use tools or workflows that depend on Docker. Many development tools, CI/CD pipelines, and deployment platforms are built specifically for Docker and may not fully support Podman’s socket emulation or require workarounds.
  • Want compatibility with widely used Docker-based environments. If your team, organization, or production environment uses Docker, maintaining consistency across development and deployment environments reduces friction and potential issues.
  • Require Docker-specific features such as Docker Swarm for orchestration, BuildKit for advanced image building, or Docker Content Trust for image signing and verification.
  • Need extensive documentation and community support. Docker’s longer history means more tutorials, Stack Overflow answers, and community resources are available for troubleshooting and learning.

In this tutorial, Docker is used because of its mature ecosystem, built-in Compose functionality, and widespread industry adoption. However, if your primary concerns are security through rootless containers and avoiding daemon dependencies, Podman is an excellent alternative that you can explore after understanding container fundamentals with Docker.

Step 1 — Installing Docker

The Docker installation package available in the default Rocky Linux repositories may not be the latest version. To get the latest version, install Docker from the official Docker repository. This section shows you how to do just that.

But first, update the package database:

  1. sudo dnf check-update

Next, install the required package to manage repositories and add the official Docker repository:

  1. sudo dnf install -y dnf-plugins-core
  2. sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo

While there is no Rocky Linux–specific repository provided by Docker, Rocky Linux is binary-compatible with Red Hat Enterprise Linux (RHEL) and can use the RHEL-compatible Docker repository. With the repository added, install Docker along with its required components:

  1. sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

After installation has completed, start the Docker daemon:

  1. sudo systemctl start docker

Verify that it’s running:

  1. sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

Output
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: disabled)
     Active: active (running) since Fri 2026-03-27 06:54:35 UTC; 31min ago
 Invocation: 0c823f2740a0481cb25d0c746598fb96
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 4276 (dockerd)
      Tasks: 9
     Memory: 127.8M (peak: 134M)
        CPU: 1.836s
     CGroup: /system.slice/docker.service
             └─4276 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Lastly, make sure it starts at every server reboot:

  1. sudo systemctl --now enable docker

Installing Docker provides both the Docker service (daemon) and the docker command-line utility. In the next steps, you’ll verify the installation and begin using Docker.
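As a quick sanity check of the installation, you can confirm that both the CLI and the Compose plugin are available. The check below is guarded so it also runs harmlessly on a machine where Docker is absent:

```shell
# Verify the docker CLI and the compose plugin are on PATH and report versions.
if command -v docker >/dev/null 2>&1; then
  docker --version
  docker compose version
else
  echo "docker CLI not found on PATH"
fi
```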

Step 2 — Executing Docker Command Without Sudo (Optional)

By default, running the docker command requires root privileges, meaning you have to prefix the command with sudo. This is because the Docker daemon runs as the root user and manages system-level resources.

The docker command can also be run by any user in the docker group, which is created automatically during the installation of Docker. If you attempt to run the docker command without prefixing it with sudo and without being in the docker group, you’ll get output like this:

Output
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

  1. sudo usermod -aG docker $(whoami)

You will need to log out of the server and back in for this change to take effect.

If you need to add a different user to the docker group, specify the username explicitly:

  1. sudo usermod -aG docker username

Security Warning: Adding a user to the docker group grants privileges equivalent to the root user. Members of this group can control the Docker daemon and access the host system. Only add trusted users, especially on shared or production systems.

The rest of this tutorial assumes you are running the docker command as a user in the docker group. If you choose not to, prepend the commands with sudo.
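After logging back in, you can confirm that the group change took effect in your session. A small guarded check, assuming a POSIX shell:

```shell
# List the groups active in the current session; "docker" must appear
# for sudo-less docker commands to work.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active in this session"
else
  echo "docker group not active yet (log out and back in, or it was never added)"
fi
```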

Understanding Docker Permissions

The Docker daemon (dockerd) runs as the root user and listens on a Unix socket located at /var/run/docker.sock. This socket serves as the primary communication channel between the Docker client (the docker command you run in your terminal) and the Docker daemon. The socket file itself has restricted permissions by default, allowing only root and members of the docker group to access it.
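You can inspect the socket’s ownership and mode directly. The path is standard, but the check below is guarded in case Docker is not installed or running where you try it:

```shell
# Expected on a stock install: srw-rw---- owned by root:docker,
# i.e. readable and writable only by root and the docker group.
ls -l /var/run/docker.sock 2>/dev/null || echo "/var/run/docker.sock not present on this machine"
```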

When you add a user to the docker group, you’re granting them read and write access to this socket. This allows them to send commands to the Docker daemon without needing to elevate their privileges using sudo for each operation. The Docker daemon then executes these commands with root privileges, enabling container operations that require system-level permissions such as manipulating network interfaces, mounting filesystems, and managing kernel features like namespaces and cgroups.

This convenience comes with an important security trade-off: because the Docker daemon runs with root privileges and executes commands on behalf of docker group members, those users effectively have root-equivalent privileges on the host system. The ability to instruct a root-privileged daemon to perform operations makes docker group membership functionally equivalent to having direct root access.

Security Implications

Understanding the specific risks associated with docker group membership is essential for making informed decisions about user permissions on your system. Docker group members can:

  • Mount host directories into containers with full read-write access. This includes sensitive system directories like /etc, /root, /home, and even /var/run where additional Unix sockets may reside. Once mounted, the container can read, modify, or delete any files in these directories, bypassing normal file permission checks since the Docker daemon performs the mount operation as root.
  • Escalate privileges by creating containers with elevated capabilities or by running containers in privileged mode. A user can start a container that has access to all host devices (--privileged flag), run processes with specific Linux capabilities that grant kernel-level permissions, or manipulate the host’s process namespace to interact with or modify processes running outside the container.
  • Access sensitive data that would normally be protected by file permissions. Even if a user cannot directly read /etc/shadow or private SSH keys on the host, they can mount these file locations into a container where they have root access inside the container, allowing them to read or copy the sensitive files.
  • Bind to privileged network ports (ports below 1024) and manipulate network traffic. Container networking is managed by the Docker daemon with root privileges, so docker group members can bind containers to any port, create custom network bridges, manipulate iptables rules through Docker’s network drivers, and potentially intercept or redirect network traffic on the host.
  • Consume system resources without normal user restrictions. Docker group members can create containers that consume all available CPU, memory, or disk space on the host system, potentially causing denial-of-service conditions. They can also create large numbers of containers or images, filling up storage allocated to Docker.
  • Execute arbitrary commands as root on the host system by leveraging container capabilities. Since containers share the host kernel and the Docker daemon runs as root, a user can craft container configurations that effectively give them code execution with root privileges on the underlying host.
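The first risk above is easy to demonstrate. This sketch is illustrative only and guarded so that it runs nothing unless Docker is actually available; the mount target and image are examples, not recommendations:

```shell
# DEMONSTRATION ONLY: a docker-group member can mount host /etc into a
# container and read it as root, bypassing host file permissions.
if command -v docker >/dev/null 2>&1; then
  docker run --rm -v /etc:/host-etc:ro rockylinux ls /host-etc
else
  echo "docker not available; skipping demonstration"
fi
```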

Best Practices

Given these significant security implications, follow these guidelines when managing docker group membership:

  • Development Environments: On personal development machines where you are the sole user and trust yourself, adding your user to the docker group is a practical convenience that streamlines development workflows.
  • Shared Systems: On multi-user systems, be extremely selective about docker group membership. Only add users who already have sudo or root access, or who absolutely require Docker access and are fully trusted with root-equivalent privileges. Consider each docker group member as having full administrative control over the server.
  • Production Environments: In production, avoid adding regular users to the docker group. Instead, use orchestration tools like Kubernetes, configuration management systems, or CI/CD pipelines that manage containers through service accounts with appropriate restrictions. If you must grant Docker access, implement additional security layers such as Docker authorization plugins, SELinux policies, or AppArmor profiles that restrict what docker group members can do.
  • Alternative Approaches: For shared environments where multiple users need container access without root privileges, consider using Podman’s rootless mode, which allows users to run containers without requiring a privileged daemon or special group membership. This provides strong isolation between users while maintaining container functionality.

Step 3 — Using the Docker Command

With Docker installed and working, now’s the time to become familiar with the command-line utility. Using docker consists of passing it a chain of options and subcommands followed by arguments. The syntax takes this form:

  1. docker [option] [command] [arguments]

To view all available subcommands, type:

  1. docker --help

The list of available subcommands will vary depending on your installed Docker version. Some commonly used commands include:

Output
Usage:  docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Common Commands:
  run         Create and run a new container from an image
  exec        Execute a command in a running container
  ps          List containers
  build       Build an image from a Dockerfile
  bake        Build from a file
  pull        Download an image from a registry
  push        Upload an image to a registry
  images      List images
  login       Authenticate to a registry
  logout      Log out from a registry
  search      Search Docker Hub for images
  version     Show the Docker version information
  info        Display system-wide information
...

To view the options available to a specific command, type:

  1. docker docker-subcommand --help

To view system-wide information, use:

  1. docker info

Step 4 — Working with Docker Images

Docker containers are run from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker. Anybody can build and host their Docker images on Docker Hub, so most applications and Linux distributions you’ll need to run Docker containers have images available there.

To check whether you can access and download images from Docker Hub, type:

  1. docker run hello-world

The output, which should include the following, indicates that Docker is working correctly:

Output
Hello from Docker! This message shows that your installation appears to be working correctly.

You can search for images available on Docker Hub by using the docker search command. For example, to search for the Rocky Linux image, type:

  1. docker search rockylinux

The command returns a list of images whose names match the search string. The output will be similar to this:

Output
NAME                         DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
rockylinux/rockylinux                                                        102
rockylinux                   The official build of Rocky Linux.              317     [OK]
rockylinux/rocky-toolbox     Toolbox image for Rocky Linux - https://gith…   2
rockylinux/rockylinux-shim   RockyLinux shim-review images                   0
unidata/rockylinux                                                           0
amd64/rockylinux             The official build of Rocky Linux.              2
litmusimage/rockylinux                                                       0
uacontainers/rockylinux      Up-to-date Rocky Linux Docker images with th…   1
arm64v8/rockylinux           The official build of Rocky Linux.              7
cctbx/rockylinux                                                             0
...

In the OFFICIAL column, [OK] indicates an image built and maintained by the organization behind the project. Once you’ve identified the image you would like to use, you can download it using the pull subcommand:

  1. docker pull rockylinux

After an image has been downloaded, you can run a container using the run subcommand:

  1. docker run rockylinux

If the image is not already available locally, Docker will automatically download it before running the container.

Note: This container exits immediately because no interactive process is attached.

To see the images that have been downloaded to your system, type:

  1. docker images

The output should look similar to the following:

Output
IMAGE                ID             DISK USAGE   CONTENT SIZE   EXTRA
hello-world:latest   452a468a4bf9       21.8kB         9.49kB   U
rockylinux:latest    fc370d748f4c        289MB         75.7MB   U

As you’ll see later in this tutorial, images that you use to run containers can be modified and used to create new images, which can then be uploaded (pushed) to Docker Hub or other Docker registries.

Step 5 — Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits after displaying a message. Containers, however, can also be interactive and run long-lived processes.

As an example, let’s run a container using the latest Rocky Linux image. The combination of the -i (interactive) and -t (pseudo-TTY) options allows you to access a shell inside the container:

  1. docker run -it rockylinux

Your command prompt should change to reflect that you’re now working inside the container. It will look similar to this:

Output
[root@<container-id> /]#

Note: The value shown in the prompt (for example, <container-id>) is the unique identifier of the running container.

You can now run commands inside the container. For example, install the MariaDB server:

  1. dnf install -y mariadb-server

You do not need to prefix commands with sudo inside the container because you are operating as the root user by default.

To exit the container, type:

  1. exit

Step 6 — Using Docker Compose for Multi-Container Applications

Docker Compose is a tool that simplifies the process of defining and running multi-container Docker applications. Instead of starting each container individually with separate docker run commands, Docker Compose allows you to define all your application’s services, networks, and volumes in a single YAML configuration file. With one command, you can then create and start all the services from your configuration, making it ideal for development environments, testing, and simple production deployments.

Docker Compose is particularly valuable when your application consists of multiple interconnected services. For example, a typical web application might include a web server, an application server, a database, and a cache, each running in its own container. Docker Compose manages the lifecycle of all these containers together, ensuring they can communicate with each other and start in the correct order.

Docker Compose is included as a plugin with modern Docker installations (Docker Engine 20.10 and later). The plugin integrates directly with the Docker CLI, so you use docker compose as a subcommand rather than a separate docker-compose binary. To verify it is available on your system, run:

  1. docker compose version

You should see output similar to:

Output
Docker Compose version v5.1.1

If the command is not found, Docker Compose may not have been installed correctly. Refer back to Step 1 to ensure you installed the docker-compose-plugin package.

Creating a Multi-Container Application

To demonstrate Docker Compose, you’ll create a simple multi-container application consisting of a web server and a Redis cache. This example shows how Docker Compose orchestrates multiple services that might work together in a real application.

Create a file named docker-compose.yml in your current directory:

docker-compose.yml
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  redis:
    image: redis:latest

This YAML configuration file defines the structure of your multi-container application. Let’s break down each section:

  • services:: Defines the containers that make up your application. Each service runs in its own container, and Docker Compose manages them as a group.

  • web:: Defines a service named “web” that will run an NGINX web server. The service name becomes the hostname that other containers can use to communicate with this service on the default Docker Compose network.

    • image: nginx:latest: Specifies that this service uses the official NGINX image from Docker Hub. The latest tag pulls the most recent stable version. Docker Compose will automatically pull this image from Docker Hub if it’s not already available locally.

    • ports: - "8080:80": Maps port 8080 on your host machine to port 80 inside the container. This allows you to access the NGINX web server by visiting http://your_server_ip:8080 in your browser, replacing your_server_ip with your Rocky Linux server’s IP address or hostname. The format is "HOST_PORT:CONTAINER_PORT". NGINX listens on port 80 by default inside the container, and this mapping makes it accessible from your host system.

  • redis:: Defines a service named “redis” that will run a Redis in-memory data store.

    • image: redis:latest: Uses the official Redis image from Docker Hub. Since no port mapping is specified, Redis is only accessible to other containers within the Docker Compose network (like the web service), not from the host machine. This is a common pattern for backend services that don’t need direct external access.

When services are defined in the same docker-compose.yml file, Docker Compose automatically creates a dedicated network for them, allowing the containers to communicate with each other using their service names as hostnames. For example, the web container could connect to Redis using the hostname redis.
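You can see this service-name resolution in action with a small, hypothetical extension of the file: an extra one-shot service that pings Redis by hostname. The ping-redis service and its command are illustrative additions, not part of the tutorial’s application:

```yaml
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  redis:
    image: redis:latest
  ping-redis:                    # hypothetical helper service
    image: redis:latest          # reused only for its redis-cli binary
    command: ["redis-cli", "-h", "redis", "ping"]   # "redis" resolves via Compose DNS
    depends_on:
      - redis
```

Running docker compose up ping-redis should print PONG once the cache is reachable, confirming that the service name works as a hostname.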

Starting the Application

To create and start all the services defined in your docker-compose.yml file, run:

  1. docker compose up -d

The -d flag runs the containers in detached mode, meaning they run in the background and don’t occupy your terminal. Docker Compose performs several actions when you run this command:

  1. Creates a dedicated network for your application (if one doesn’t already exist)
  2. Pulls any images that aren’t already available locally (in this case, nginx:latest and redis:latest)
  3. Creates containers for each service defined in the configuration
  4. Starts the containers in the appropriate order
  5. Configures networking so the containers can communicate with each other

You should see output similar to this:

Output
[+] Running 3/3
 ✔ Network myapp_default    Created
 ✔ Container myapp-redis-1  Started
 ✔ Container myapp-web-1    Started

The output shows that Docker Compose created a network and successfully started both containers. Docker Compose automatically generates container names by combining your directory name (or project name) with the service name and an index number.

If you run the command without the -d flag, Docker Compose will display the log output from all containers in your terminal, which is useful for debugging but prevents you from running other commands in that terminal session.
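Even in detached mode, you can inspect service logs at any time without attaching to the containers. A guarded sketch (the service name web matches the file above):

```shell
# Show recent log lines from the web service without attaching to it.
if command -v docker >/dev/null 2>&1; then
  docker compose logs --tail=20 web
else
  echo "docker not available"
fi
```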

Verifying the Containers

To confirm that both services are running correctly, use the docker ps command:

  1. docker ps

You should see both the nginx and redis containers running. The output will look similar to this:

Output
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS                  NAMES
a1b2c3d4e5f6   nginx:latest   "/docker-entrypoint.…"   30 seconds ago   Up 28 seconds   0.0.0.0:8080->80/tcp   myapp-web-1
f6e5d4c3b2a1   redis:latest   "docker-entrypoint.s…"   30 seconds ago   Up 29 seconds   6379/tcp               myapp-redis-1

The output provides several important details:

  • CONTAINER ID: A unique identifier for each container
  • IMAGE: The Docker image each container is running
  • COMMAND: The default command that was executed when the container started
  • CREATED: How long ago the container was created
  • STATUS: Shows the container is up and running, along with its uptime
  • PORTS: Shows port mappings. For the web service, you can see 0.0.0.0:8080->80/tcp, indicating that port 8080 on all host network interfaces is forwarded to port 80 in the container. For the redis service, port 6379 is exposed within the Docker network but not mapped to the host.
  • NAMES: The automatically generated container names

To verify the web server is working correctly, open a web browser and visit http://your_server_ip:8080, replacing your_server_ip with your server’s IP address (use http://localhost:8080 if the browser runs on the server itself).

You should see the default NGINX welcome page, which confirms that the web server is running and accessible. The page typically displays “Welcome to nginx!” along with basic information about the server.

Alternatively, you can test it from the command line using curl:

  1. curl http://localhost:8080

This will display the HTML of the NGINX welcome page in your terminal.

Stopping the Application

When you’re finished working with your multi-container application, you can stop and remove all containers, networks, and other resources created by Docker Compose with a single command:

  1. docker compose down

This command performs a clean shutdown by:

  1. Stopping all running containers defined in the docker-compose.yml file gracefully
  2. Removing the stopped containers from your system
  3. Removing the dedicated network created for the application
  4. Preserving the downloaded images for faster startup next time

You should see output similar to:

Output
[+] down 3/3
 ✔ Container myapp-web-1    Removed
 ✔ Container myapp-redis-1  Removed
 ✔ Network myapp_default    Removed

The containers are removed, but the Docker images (nginx:latest and redis:latest) remain on your system. This means the next time you run docker compose up, the containers will start much faster since the images don’t need to be downloaded again.

If you want to verify that the containers have been removed, run docker ps -a again, and they should no longer appear in the list.

Step 7 — Committing Changes in a Container to a Docker Image

When you start a container from a Docker image, you can create, modify, and delete files just like on a regular system. These changes apply only to that container. You can start and stop it, but if you remove it using the docker rm command, the changes will be lost.

This section shows you how to save the state of a container as a new Docker image.

After installing MariaDB inside the Rocky Linux container, you now have a container that differs from the original image used to create it.

To save the state of the container as a new image, first exit from it:

  1. exit

Then commit the changes to a new Docker image using the following command. The -m option specifies a commit message, and -a specifies the author. The container ID is the one you noted earlier:

  1. docker commit -m "What did you do to the image" -a "Author Name" container-id repository/new_image_name

For example:

  1. docker commit -m "added mariadb-server" -a "Author Name" container-id sammy/rockylinux-mariadb

Note: The new image is saved locally on your system. You can push it to a registry like Docker Hub to share it with others.

Best Practice: While docker commit is useful for quick experiments, production workflows typically use a Dockerfile. Dockerfiles provide a repeatable and version-controlled way to build images.
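For comparison, here is a minimal Dockerfile that would produce roughly the same image as the docker commit above. This is a sketch; the cleanup step and tag are conventions, not requirements:

```dockerfile
# Reproducible equivalent of committing a Rocky Linux container
# after installing MariaDB inside it.
FROM rockylinux:latest
RUN dnf install -y mariadb-server && dnf clean all
```

You would build it with docker build -t sammy/rockylinux-mariadb . from the directory containing the Dockerfile, and the recipe itself can be committed to version control.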

After the operation completes, list the Docker images on your system:

  1. docker images

The output should be similar to this:

Output
IMAGE                             ID             DISK USAGE   CONTENT SIZE   EXTRA
hello-world:latest                452a468a4bf9       21.8kB         9.49kB    U
rockylinux:latest                 fc370d748f4c        289MB         75.7MB    U
sammy/rockylinux-mariadb:latest   c3fce61a8dbf        855MB          261MB

In this example, sammy/rockylinux-mariadb is the new image derived from the Rocky Linux base image. The size difference reflects the changes made, such as installing MariaDB.

Step 8 — Listing Docker Containers

After using Docker for a while, you’ll have multiple containers on your system, some running and others stopped.

To view currently running containers, use:

  1. docker ps

You will see output similar to the following:

Output
CONTAINER ID   IMAGE        COMMAND       CREATED       STATUS       PORTS   NAMES
f7c79cc556dd   rockylinux   "/bin/bash"   3 hours ago   Up 3 hours           silly_spence

To view all containers (both running and stopped), use the -a option:

  1. docker ps -a

To view the most recently created container, use the -l option:

  1. docker ps -l

The STATUS column indicates the state of the container, such as:

  • Up: The container is currently running
  • Exited: The container has stopped

To stop a running container, use:

  1. docker stop container-id

You can find the container-id in the output of the docker ps command.

Step 9 — Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with others on Docker Hub or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

This section shows you how to push a Docker image to Docker Hub.

If you don’t already have an account, register at Docker Hub. After creating your account, log into Docker Hub from your terminal. You’ll be prompted to authenticate:

  1. docker login -u docker-registry-username

Enter your password when prompted. If you specified the correct password, authentication should succeed. Then you can push your image using:

  1. docker push docker-registry-username/docker-image-name

It may take some time for the upload to complete. When finished, the output will look similar to this:

Output
The push refers to a repository [docker.io/sammy/rockylinux-mariadb]
670194edfaf5: Pushed
5f70bf18a086: Mounted from library/rockylinux
6a6c96337be1: Mounted from library/rockylinux
...

After pushing an image to a registry, it should be listed on your account’s dashboard.

If a push attempt results in an error like the following, you likely did not log in:

Output
The push refers to a repository [docker.io/sammy/rockylinux-mariadb]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required

Log in, then repeat the push attempt.

FAQs

1. How do I install Docker on Rocky Linux?

To install Docker on Rocky Linux, follow these steps:

  1. Update the package database:

    1. sudo dnf check-update
  2. Install repository management tools:

    1. sudo dnf install -y dnf-plugins-core
  3. Add the Docker repository:

    1. sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
  4. Install Docker and components:

    1. sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  5. Start the Docker service:

    1. sudo systemctl start docker
  6. Enable Docker to start at boot:

    1. sudo systemctl enable docker

2. Can you run Docker on Rocky Linux?

Yes, you can run Docker on Rocky Linux. Rocky Linux is binary-compatible with Red Hat Enterprise Linux (RHEL), allowing it to use Docker’s RHEL-compatible repositories without issues. While Rocky Linux includes Podman as the default container runtime, Docker remains a popular choice due to its mature ecosystem, extensive documentation, Docker Compose integration, and widespread industry adoption. Many organizations prefer Docker for its compatibility with existing workflows, CI/CD pipelines, and third-party tools that expect Docker-specific APIs.

3. How do I install Docker Compose on Rocky Linux?

Docker Compose is included as a plugin when you install Docker from the official repository:

  • Automatic installation: Install the docker-compose-plugin package along with Docker using the command:

    1. sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  • Verify installation: Confirm Docker Compose is available:

    1. docker compose version

Use docker compose (with a space) instead of the older docker-compose (with a hyphen) for all operations.
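For a quick end-to-end check of the plugin, you can write a minimal Compose file and validate it. The service name and image below are assumptions chosen for illustration:

```shell
# Create a minimal compose.yaml with a single nginx service (illustrative).
cat > compose.yaml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF

# With Docker installed, validate and start it with:
#   docker compose config   # parses and prints the resolved file
#   docker compose up -d    # starts the service in the background
cat compose.yaml
```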

4. Does Docker need sudo on Rocky Linux?

By default, yes, but you can configure it to work without sudo.

Docker requires sudo privileges because the Docker daemon runs as root and manages system-level resources. However, you can configure Docker to work without sudo by adding your user to the docker group using sudo usermod -aG docker $(whoami). After running this command, you need to log out and back in for the change to take effect, and then you can run Docker commands without sudo.

Be aware that this convenience comes with significant security implications. Members of the docker group effectively have root-equivalent privileges on the host system, as they can mount sensitive directories, create privileged containers, and execute commands with elevated permissions. Only add trusted users to the docker group, especially on shared or production systems.
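Before adding your user to the group, you can check whether it is already a member. A small sketch using standard tools:

```shell
# List the current user's groups and report docker membership.
if id -nG | tr ' ' '\n' | grep -qx docker; then
    status="in"
else
    status="not in"
fi
echo "current user is $status the docker group"
```

Remember that group changes only take effect in new login sessions, so run this check after logging out and back in.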

5. What is the difference between Docker CE and Podman on Rocky Linux?

Docker CE (Community Edition) and Podman have several fundamental differences:

  • Architecture: Docker uses a client-server architecture with a central daemon (dockerd) that runs with root privileges and manages all containers. Podman is daemonless; each container runs as a direct child process without requiring a background service.
  • Security: Docker can run rootless containers but requires additional configuration. Podman supports rootless containers by default, allowing non-privileged users to run containers without root access.
  • Ecosystem: Docker has broader ecosystem support, including Docker Hub, Docker Compose, Docker Desktop, and extensive third-party tool integration. Podman offers Docker CLI compatibility and socket emulation, but some tools may require additional configuration.
  • CI/CD integration: Docker integrates more readily with CI/CD systems and cloud platforms. Podman may require additional configuration for some CI/CD tools.
  • Compose support: Docker Compose works seamlessly out of the box. Podman has podman-compose as an alternative.

Recommendation: Choose Docker for ecosystem compatibility and mature tooling. Choose Podman for enhanced security through rootless operation and daemonless architecture.
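On a fresh Rocky Linux system you may have Podman, Docker, both, or neither on the PATH. A small sketch that reports which runtime, if any, is available:

```shell
# Check the PATH for each container runtime and record the first match.
runtime="none"
for candidate in docker podman; do
    if command -v "$candidate" >/dev/null 2>&1; then
        runtime="$candidate"
        break
    fi
done
echo "container runtime found: $runtime"
```

The loop order means Docker is reported when both runtimes are installed; swap the order if you prefer Podman.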

6. How do I remove conflicting packages before installing Docker on Rocky Linux?

Before installing Docker from the official repository:

  1. Run the removal command:

    1. sudo dnf remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine podman runc
  2. This removes:

    • Older Docker packages with different naming conventions
    • Podman if installed
    • Related container runtime packages
  3. The command is safe to run even if none of these packages are installed; DNF will simply report that they’re not present.

  4. After removing conflicting packages, add the Docker repository and install Docker CE as described in Step 1 of this tutorial

7. How do I verify that Docker is installed and running correctly on Rocky Linux?

Follow these verification steps:

  1. Check the service status:

    1. sudo systemctl status docker

    Confirm the output shows “active (running)”.

  2. Run a test container:

    1. docker run hello-world

    (Use with or without sudo depending on your docker group membership)

  3. Verify the test output shows a message confirming your installation is working.

  4. Check the Docker version:

    1. docker --version

    For detailed client and server information, use:

    1. docker version
  5. View system-wide information:

    1. docker info

    This displays details including:

    • Number of containers and images
    • Storage driver configuration
    • Daemon settings
    • Runtime information

If all these commands execute without errors, your Docker installation is working correctly.
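The checks above can be collected into one script that degrades gracefully when a tool is missing, which is convenient for running the same verification across several hosts. This is a sketch, not a substitute for the manual steps:

```shell
# Report on each verification step, skipping tools that are absent.
check() {
    # $1 = label, remaining args = command to run
    label="$1"; shift
    if command -v "$1" >/dev/null 2>&1; then
        echo "== $label =="
        "$@" 2>&1 || echo "$label: command failed"
    else
        echo "$label: $1 not installed, skipping"
    fi
}

check "service status" systemctl is-active docker
check "client version" docker --version
check "daemon info" docker info
```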

8. How do I push a Docker image to Docker Hub from Rocky Linux?

Follow these steps to push an image to Docker Hub:

  1. Create a Docker Hub account if you don’t have an account.

  2. Log in from the terminal:

    1. docker login -u your-username

    Enter your password when prompted.

  3. Tag your image using the format username/image-name. If you’ve already created an image, tag it with:

    1. docker tag existing-image-name username/new-image-name
  4. Push the image:

    1. docker push username/image-name
  5. Wait for the upload to complete. The upload may take several minutes depending on image size and network speed.

  6. Verify on Docker Hub. Once complete, the image will appear in your Docker Hub account dashboard.

  7. Your image is now available for others to pull:

    1. docker pull username/image-name

If you encounter an “unauthorized: authentication required” error, verify you’re logged in with docker login before attempting the push again.
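The username/image-name:tag format can be taken apart with plain shell parameter expansion, which is handy when scripting pushes. The reference below is the example image from this tutorial:

```shell
# Split a Docker image reference into its parts using parameter expansion.
ref="sammy/rockylinux-mariadb:latest"

user="${ref%%/*}"    # everything before the first "/"
rest="${ref#*/}"     # everything after the first "/"
name="${rest%%:*}"   # image name before the ":"
tag="${rest##*:}"    # tag after the last ":"

echo "user: $user, image: $name, tag: $tag"
```

Note that this simple split assumes a `username/name:tag` reference; registry-qualified references such as `docker.io/sammy/...` would need an extra step.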

Conclusion

In this tutorial, you installed Docker on a Rocky Linux server and learned the fundamentals of working with containers and images. You explored Docker’s architecture and also learned how to configure Docker permissions, understanding both the convenience and security implications of docker group membership.

Beyond installation, you learned how to pull and run containers, work with Docker images, use Docker Compose to manage multi-container applications, commit container changes to create custom images, and push images to Docker Hub for distribution. These skills form the foundation for containerizing applications, managing development environments, and deploying services consistently across different systems.

As next steps, you can explore writing Dockerfiles to automate image builds, experiment with Docker networking and volumes for persistent data storage, or integrate Docker into your CI/CD pipelines. The container management skills you’ve developed here will serve you well whether you’re building microservices, setting up development environments that match production, or deploying applications at scale with orchestration tools like Kubernetes.

About the author(s)

Manikandan Kurup
Senior Technical Content Engineer I

With over 6 years of experience in tech publishing, Mani has edited and published more than 75 books covering a wide range of data science topics. Known for his strong attention to detail and technical knowledge, Mani specializes in creating clear, concise, and easy-to-understand content tailored for developers.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.