By Erika Heidi and Vinayak Baranwal

Ansible handlers are named tasks that run only after being notified, and only when at least one notifying task reports a change. They run once per play after regular tasks finish, which keeps service restarts and reloads tied to actual configuration updates.
Handlers solve a recurring operations problem: restarting or reloading daemons on every playbook run even when nothing changed. By coupling those actions to changed results, you avoid unnecessary downtime and noisy logs while keeping deployments idempotent. The sections that follow define handler syntax, compare handlers to regular tasks, walk through an Nginx restart example, cover the listen directive and meta: flush_handlers, show handlers inside roles, and document advanced patterns plus common failure modes.
Handlers follow a small set of rules:

- Handlers run only when notified by a task that reported changed, then execute after normal tasks in the current play complete.
- Handlers run in the order they appear under handlers:, not the order in which tasks notified them.
- The notify value must match the handler name (or a listen topic) exactly, including capitalization and spaces, or the handler will not run.
- The listen directive lets several handlers subscribe to one topic name so a single notification can trigger multiple services.
- ansible.builtin.meta with flush_handlers runs all handlers notified so far immediately, which matters when a later task depends on a restart having already happened.
- Role handlers live in roles/<rolename>/handlers/main.yml and follow the same timing as playbook handlers, but name collisions between role-level and playbook-level handlers can cause duplicate restarts or confusing logs.
- Handlers honor the same privilege escalation settings as tasks (for example, become).

Handlers are deferred tasks. Ansible collects notifications as the play runs, then runs the handler section after all tasks in that play succeed (unless you flush early or use CLI flags described later). That distinction is what separates handlers from ordinary tasks: ordinary tasks always run in order; handlers run only when notified, and execute at the end of the play unless you flush them early.
These rules govern handler execution order:

- A handler is scheduled only when a task that names it in notify runs and reports changed.
- Handlers execute in the order they are defined under handlers:, not the order in which they were notified.

Together, those rules make handler behavior predictable for rolling updates and multi-step config changes.
To see deduplication in practice, consider a play where three tasks all notify the same handler:
tasks:
  - name: Update main configuration
    ansible.builtin.copy:
      src: files/main.conf
      dest: /etc/myapp/main.conf
    notify: Restart myapp

  - name: Update secondary configuration
    ansible.builtin.copy:
      src: files/secondary.conf
      dest: /etc/myapp/secondary.conf
    notify: Restart myapp

  - name: Update log configuration
    ansible.builtin.copy:
      src: files/log.conf
      dest: /etc/myapp/log.conf
    notify: Restart myapp

handlers:
  - name: Restart myapp
    ansible.builtin.service:
      name: myapp
      state: restarted
Even if all three tasks report changed, the output shows the handler running exactly once, after all tasks complete:
TASK [Update main configuration] ****************************************************************
changed: [203.0.113.10]
TASK [Update secondary configuration] ***********************************************************
changed: [203.0.113.10]
TASK [Update log configuration] *****************************************************************
changed: [203.0.113.10]
RUNNING HANDLER [Restart myapp] *****************************************************************
changed: [203.0.113.10]
PLAY RECAP **************************************************************************************
203.0.113.10 : ok=4 changed=4 unreachable=0 failed=0 skipped=0
The handler appears once in RUNNING HANDLER regardless of how many tasks notified it.
| Behavior | Tasks | Handlers |
|---|---|---|
| Execution trigger | Runs every time the play reaches the task (unless skipped) | Runs only when notified by a task that reported changed (unless forced) |
| Deduplication | Each task runs each time it is reached | Each handler runs at most once per play, even after many notifications |
| Placement in playbook | Lives under tasks: (or included task files) | Lives under handlers: in the play, role, or imported handler files |
| Use case | Install packages, template files, assert state | React to change: restart services, reload daemons, run post-change hooks |
Use a handler when the work should happen only if something changed and the same action might be needed from several tasks (for example, one restart after many file updates). Use a when condition on a task when the decision depends on variables, facts, or inventory, not on whether a specific resource just changed. Combining changed_when with handlers is common: if a module incorrectly reports ok, setting changed_when ensures notifications still fire correctly and handler execution stays conditional on actual change.
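As a sketch of that combination (the myapp service, the /usr/local/bin/myapp-ctl tool, and its output string are illustrative assumptions), a command task can use changed_when so its result drives notification:

```yaml
tasks:
  - name: Apply runtime tuning               # command modules report "changed" on every run by default
    ansible.builtin.command: /usr/local/bin/myapp-ctl apply-tuning
    register: tuning_result
    # Only report "changed" (and therefore notify) when the tool says it did something.
    changed_when: "'applied' in tuning_result.stdout"
    notify: Restart myapp

handlers:
  - name: Restart myapp
    ansible.builtin.service:
      name: myapp
      state: restarted
```

Without the changed_when expression, the command task would report changed on every run and the handler would restart the service even when nothing was applied.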
A play declares handlers in a handlers: list at the same indentation level as tasks:. Each handler is a YAML list item with a name and one module call.
This minimal block shows only the handlers: section with one entry using ansible.builtin.service:
handlers:
  - name: Restart example service   # Handler name referenced by notify
    ansible.builtin.service:        # Module (FQCN)
      name: example                 # Service unit name
      state: restarted              # Desired service state
Any task can include notify with a string that matches a handler name. Notify matching is literal: the string must be identical to the handler name, including capitalization and spacing.
tasks:
  - name: Deploy configuration
    ansible.builtin.copy:
      src: files/app.conf
      dest: /etc/myapp/app.conf
    notify: Restart example service   # Must match handlers[].name exactly
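notify also accepts a YAML list when one change should trigger more than one handler. A minimal sketch (the second handler name is an assumption for illustration):

```yaml
tasks:
  - name: Deploy shared TLS certificate
    ansible.builtin.copy:
      src: files/cert.pem
      dest: /etc/ssl/certs/app-cert.pem
    notify:                        # List form: each entry must match a handler name exactly
      - Restart example service
      - Reload firewall rules      # Hypothetical second handler defined elsewhere in the play
```

Each listed handler is deduplicated independently, so every one of them still runs at most once per play.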
Ansible triggers handlers only when the notifying task reports changed. A task reports changed only when it modifies state on the target host (for example, a file update). If the resource is already correct, the task reports ok and does not notify.
This walkthrough keeps the original playbook-12.yml flow: it installs Nginx, prepares a document root, applies a template, uses ansible.builtin.replace to point the default site at a new root, opens port 80, and notifies a Restart Nginx handler. It is a concrete pattern for managing web stack restarts idempotently.
Supply files/landing-page.html.j2 on the control node as in the Ansible playbook series, or replace the template task with a static file copy if you prefer.
Create playbook-12.yml in your ansible-practice directory:
- cd ~/ansible-practice
- nano playbook-12.yml
Add the following:
---
- hosts: all
  become: yes
  vars:
    page_title: My Second Landing Page
    page_description: This is my second landing page description.
    doc_root: /var/www/mypage

  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: latest

    - name: Make sure new doc root exists
      file:
        path: "{{ doc_root }}"
        state: directory
        mode: '0755'

    - name: Apply Page Template
      template:
        src: files/landing-page.html.j2
        dest: "{{ doc_root }}/index.html"

    - name: Replace document root on default Nginx configuration
      replace:
        path: /etc/nginx/sites-available/default
        regexp: '(\s+)root /var/www/html;(\s+.*)?$'
        replace: \g<1>root {{ doc_root }};\g<2>
      notify: Restart Nginx    # Notifies handler below when this task is "changed"

    - name: Allow all access to tcp port 80
      ufw:
        rule: allow
        port: '80'
        proto: tcp

  handlers:
    - name: Restart Nginx    # Handler name must match notify string exactly
      service:
        name: nginx
        state: restarted
Note: Newer style guides recommend fully qualified collection names (FQCN) for modules, for example ansible.builtin.apt instead of apt. This file keeps short names for continuity with the original series; in new playbooks, prefer FQCN everywhere.
Save and close the file.
The replace task looks for a pattern and rewrites the default Nginx site root. When it changes the file, Ansible notifies Restart Nginx. The handler runs after the remaining tasks in the play because handlers are deferred to the end of the play unless you insert meta: flush_handlers.
Run the playbook with privilege escalation. Use -K if your SSH user needs a sudo password:
- ansible-playbook -i inventory playbook-12.yml -u sammy -K
BECOME password:
PLAY [all] **********************************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [203.0.113.10]
TASK [Install Nginx] ************************************************************************************
ok: [203.0.113.10]
TASK [Make sure new doc root exists] ********************************************************************
changed: [203.0.113.10]
TASK [Apply Page Template] ******************************************************************************
changed: [203.0.113.10]
TASK [Replace document root on default Nginx configuration] *********************************************
changed: [203.0.113.10]
TASK [Allow all access to tcp port 80] ******************************************************************
ok: [203.0.113.10]
RUNNING HANDLER [Restart Nginx] *************************************************************************
changed: [203.0.113.10]
PLAY RECAP **********************************************************************************************
203.0.113.10 : ok=7 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The RUNNING HANDLER line appears after the ordinary tasks, immediately before PLAY RECAP. Visiting the server IP in a browser should show the updated landing page:

Note: On later runs, if the replace task finds no matching text, it reports ok, not changed, so the handler does not run. That idempotent behavior prevents redundant restarts when configuration is already correct.
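On such a second run against an already-configured host, the output would look roughly like this (illustrative excerpt; note that no RUNNING HANDLER line appears and the recap shows changed=0):

```
TASK [Replace document root on default Nginx configuration] *********************************************
ok: [203.0.113.10]
PLAY RECAP **********************************************************************************************
203.0.113.10 : ok=6 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```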
The listen directive registers a topic name. Tasks notify that topic, and every handler that lists the same listen value is scheduled. The handler still keeps its own name for logging and for direct notification by name.
Use listen when one change should trigger several reactions, for example restarting both Nginx and PHP-FPM after PHP or vhost updates. It avoids long comma-separated notify lists and keeps each handler focused on one service.
tasks:
  - name: Deploy PHP-FPM pool configuration
    ansible.builtin.copy:
      src: files/www.conf
      dest: /etc/php/8.2/fpm/pool.d/www.conf
    notify: "web stack restarted"    # Topic name; can match multiple listen values

handlers:
  - name: Restart Nginx
    listen: "web stack restarted"    # Subscribes handler to the topic
    ansible.builtin.service:
      name: nginx
      state: restarted

  - name: Restart PHP-FPM
    listen: "web stack restarted"    # Same topic, second handler
    ansible.builtin.service:
      name: php8.2-fpm
      state: restarted
Ansible deduplicates each handler independently, but both handlers run once before the play ends because they are distinct handler entries.
If a task notifies the topic name and another task notifies a handler’s individual name in the same play, that handler is still deduplicated to a single execution. The handler does not run twice just because it was notified through two different paths.
tasks:
  - name: Deploy vhost config
    ansible.builtin.copy:
      src: files/vhost.conf
      dest: /etc/nginx/sites-available/mysite.conf
    notify: "web stack restarted"    # Notifies via topic

  - name: Rotate Nginx TLS certificate
    ansible.builtin.copy:
      src: files/cert.pem
      dest: /etc/nginx/ssl/cert.pem
    notify: Restart Nginx            # Notifies Nginx handler directly by name

handlers:
  - name: Restart Nginx
    listen: "web stack restarted"
    ansible.builtin.service:
      name: nginx
      state: restarted
In this play, Restart Nginx is notified twice (once through the topic and once by name), but it still runs exactly once.
meta: flush_handlers immediately executes every handler that has been notified up to that point in the play, without waiting for the remaining tasks to finish. Insert it as a task using ansible.builtin.meta anywhere in your task list where a subsequent task depends on the handler having already run.
By default, notified handlers run only after all tasks in the current play finish. That is usually what you want, but it breaks down when a later task assumes a service already restarted. For example, a health check that runs mid-play will hit the old process until handlers run, so the play can fail even though the fix was already notified.
Use meta: flush_handlers when a task must see post-restart state before the play ends. Typical cases include smoke tests, API checks, or follow-up tasks that read files a service recreates on restart.
---
- hosts: all
  become: true
  tasks:
    - name: Deploy application configuration
      ansible.builtin.copy:
        src: files/app.conf
        dest: /etc/myapp/app.conf
      notify: Restart myapp

    - name: Apply pending handler restarts now
      ansible.builtin.meta: flush_handlers    # Runs notified handlers before the next task

    - name: Verify HTTP health endpoint after restart
      ansible.builtin.uri:
        url: http://127.0.0.1:8080/health
        status_code: 200

  handlers:
    - name: Restart myapp
      ansible.builtin.service:
        name: myapp
        state: restarted
Example output excerpt showing a mid-play handler:
TASK [Deploy application configuration] *****************************************************************
changed: [203.0.113.10]
RUNNING HANDLER [Restart myapp] *************************************************************************
changed: [203.0.113.10]
TASK [Verify HTTP health endpoint after restart] **********************************************************
ok: [203.0.113.10]
Note: meta: flush_handlers only flushes handlers that have already been notified when the meta task runs. Notifications issued later in the play still run at end of play unless you flush again.
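A play can flush more than once; each meta: flush_handlers drains only the notifications issued before it. A sketch of that pattern (the myapp service and template names are assumptions):

```yaml
tasks:
  - name: Update database configuration
    ansible.builtin.template:
      src: db.conf.j2
      dest: /etc/myapp/db.conf
    notify: Restart myapp

  - name: Restart now, before migration-dependent tasks
    ansible.builtin.meta: flush_handlers   # Drains the notification above

  - name: Update cache configuration
    ansible.builtin.template:
      src: cache.conf.j2
      dest: /etc/myapp/cache.conf
    notify: Restart myapp                  # Issued after the first flush...

  # ...so this second notification waits until end of play, unless you
  # add another "ansible.builtin.meta: flush_handlers" task here.
```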
Roles package tasks, handlers, defaults, and other content together. Handlers in roles use the same notify and execution rules as play-level handlers; the difference is file layout and reuse across plays. For a broader picture of playbook structure, see Configuration Management 101: Writing Ansible Playbooks.
Ansible reads handler tasks from handlers/main.yml inside the role. A typical tree:
roles/
└── webserver/
    ├── tasks/
    │   └── main.yml
    └── handlers/
        └── main.yml
roles/webserver/handlers/main.yml might contain:
# roles/webserver/handlers/main.yml
- name: Restart Nginx
  ansible.builtin.service:
    name: nginx
    state: restarted
A task in roles/webserver/tasks/main.yml notifies it like any other handler:
# roles/webserver/tasks/main.yml
- name: Install site configuration
  ansible.builtin.template:
    src: site.conf.j2
    dest: /etc/nginx/sites-available/default
  notify: Restart Nginx
Role handlers use the same end-of-play execution timing as handlers declared directly in a play. They are bundled with the role, but they still join the play’s global handler list for that run.
A playbook-level handler and a role-level handler that share the same name are still different handler objects. When a task inside the role uses notify: Restart Nginx, Ansible resolves that notification to the role-level handler, not the playbook-level one. However, if a playbook task also notifies Restart Nginx, it resolves to the playbook-level handler. Both handlers end up queued, and both run at end of play, meaning the service restarts twice. This is the most common unexpected behavior when handler names collide across scopes.
To confirm which handlers ran and in what order, use the -v flag:
- ansible-playbook -i inventory playbook.yml -v
The verbose output lists each RUNNING HANDLER line as it fires. When a role defines a handler, Ansible prefixes the handler name with the role name in the output (for example, RUNNING HANDLER [webserver : Restart Nginx]), which makes it straightforward to distinguish role-level handlers from playbook-level ones and identify where double execution is coming from.
Warning: Duplicate handler names in different scopes are a common source of handler-not-running confusion or double execution. Prefer unique names per play, or consolidate handlers in one place.
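One straightforward way to keep scopes distinct is to include the role in the handler name, so a play-level notification cannot match the role handler by accident. A sketch under that convention (the exact naming scheme is an assumption, not an Ansible requirement):

```yaml
# roles/webserver/handlers/main.yml
- name: Restart Nginx (webserver role)   # Unique name avoids cross-scope collisions
  ansible.builtin.service:
    name: nginx
    state: restarted
```

```yaml
# roles/webserver/tasks/main.yml
- name: Install site configuration
  ansible.builtin.template:
    src: site.conf.j2
    dest: /etc/nginx/sites-available/default
  notify: Restart Nginx (webserver role)  # Unambiguously targets the role handler
```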
The following patterns extend basic handler behavior for multi-host inventories, loops, and failure recovery scenarios.
Set run_once: true on a handler when the side effect should happen a single time for the whole play, for example posting to a chat webhook or writing a shared summary artifact.
handlers:
  - name: Notify deployment channel
    ansible.builtin.debug:
      msg: "Deployment finished for {{ inventory_hostname }}"
    run_once: true
Warning: Do not use run_once: true on a handler that restarts or reloads a service when you have multiple hosts in the inventory. Ansible will run the handler on only one host and silently skip the rest, leaving the other hosts running the old process. Reserve run_once for side effects that are genuinely inventory-wide, such as sending a notification or writing a summary file to a shared location.
When notify appears on a looping task, each changed iteration can queue the same handler, but the handler still runs once at the end of the play.
- name: Install multiple snippets
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: "/etc/myapp/conf.d/{{ item | basename }}"
  loop:
    - files/a.conf
    - files/b.conf
  notify: Reload myapp

handlers:
  - name: Reload myapp
    ansible.builtin.service:
      name: myapp
      state: reloaded
If both loop iterations report changed, the output confirms the handler fires once, not twice:
TASK [Install multiple snippets] ****************************************************************
changed: [203.0.113.10] => (item=files/a.conf)
changed: [203.0.113.10] => (item=files/b.conf)
RUNNING HANDLER [Reload myapp] ******************************************************************
changed: [203.0.113.10]
By default, if the play fails before the handler phase, notified handlers do not run. Forcing handler execution is available through the CLI flag --force-handlers:
- ansible-playbook -i inventory playbook.yml --force-handlers
--force-handlers tells Ansible to run handlers that were notified even after a task failure. That can help leave services in a known state during recovery jobs, but it can also restart services based on partial changes.
If forced handler execution should always apply, a CLI-only flag is a weak choice: it does not live in version-controlled configuration and is easy to miss in shared CI jobs. Two alternatives are better suited to persistent configuration:
Set it at the playbook level with the force_handlers key:
- hosts: all
  force_handlers: true
  tasks:
    ...
Or set the environment variable on the control node before running the playbook:
- ANSIBLE_FORCE_HANDLERS=true ansible-playbook -i inventory playbook.yml
The playbook-level key is the most explicit and version-controlled option for CI/CD pipelines.
Warning: --force-handlers runs all notified handlers even if a task reported failure, which may leave services in an inconsistent state. Use it deliberately in CI/CD pipelines or recovery scenarios, and review task results before relying on the restarts.
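To make the tradeoff concrete, here is a sketch of the scenario force_handlers addresses (the myapp service and the post-deploy command are illustrative assumptions): a config task notifies a restart, a later task fails, and without forcing, the restart would never happen.

```yaml
---
- hosts: all
  become: true
  force_handlers: true                # Notified handlers run even if a later task fails
  tasks:
    - name: Deploy configuration
      ansible.builtin.copy:
        src: files/app.conf
        dest: /etc/myapp/app.conf
      notify: Restart myapp

    - name: Run post-deploy step that may fail
      ansible.builtin.command: /usr/local/bin/myapp-ctl post-deploy  # Hypothetical tool

  handlers:
    - name: Restart myapp
      ansible.builtin.service:
        name: myapp
        state: restarted
```

With force_handlers: true, a failure in the post-deploy step still leaves myapp running the new configuration; without it, the host would keep the old process alongside the new config file.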
Handler problems often show up only on full playbook runs. For quick checks against a host without a playbook, see How To Manage Multiple Servers With Ansible Ad Hoc Commands.
| Symptom | Likely Cause | Fix |
|---|---|---|
| Handler does not execute after task runs | Task reported ok, not changed | Review task logic; use changed_when if the task cannot natively detect change |
| Handler name mismatch | notify string does not exactly match handler name | Compare strings character by character, including capitalization and spacing |
| Handler skipped in --check mode | Check mode does not apply changes, so tasks report no change | Add --diff to see what would change, then do a full dry run on a non-production host; there is no way to fire handlers in check mode without also applying the tasks |
| Handler runs when not expected | Variable or loop scope is wider than intended, or changed_when: true is set unconditionally | Scope changed_when conditionally; verify loop item change detection |
Note: The ansible-playbook -v or -vv flag outputs each handler notification and execution event. Use verbose mode as your first debugging step when a handler behaves unexpectedly.
Q: What is the purpose of using handlers within Ansible playbooks?
Handlers execute conditional tasks, such as restarting a service, only when a preceding task reports a change. This prevents unnecessary service interruptions when no configuration has changed.
Q: What is the difference between handlers and tasks in Ansible?
Tasks run unconditionally during each play execution. Handlers only run when explicitly notified by a task that reported a changed state, and they run once per play regardless of how many tasks notify them.
Q: When should you use a handler instead of a task with a when conditional?
Use a handler when the action should only occur if something changed and when it may be notified from multiple tasks. A when conditional on a task is better suited for logic based on a known variable or fact rather than on change state.
Q: What does meta: flush_handlers do and when should you use it?
meta: flush_handlers forces all currently notified handlers to execute immediately at that point in the play, rather than waiting until the end. Use it when a subsequent task depends on the handler having already run, such as when a service must be restarted before a health check task runs.
Q: How do you define handlers inside an Ansible role?
Place handler definitions in roles/rolename/handlers/main.yml. These handlers are available to all tasks within that role and follow the same notify syntax as playbook-level handlers.
Q: Why is my handler not running even though the task completed successfully?
The most common cause is that the task did not report a changed status. Handlers are only triggered on changed, not on ok. Also verify that the string in notify exactly matches the handler name field, including capitalization and spacing.
Q: Can multiple tasks notify the same handler?
Yes. If multiple tasks notify the same handler in a single play, the handler still runs only once at the end of the play, regardless of how many notifications were issued.
Q: Does the listen directive replace the handler name in a notify call?
No. The listen directive adds an additional topic that tasks can notify. A handler can have both a name and one or more listen values. Tasks can notify the handler using either the name or any of its listen topics.
This article covered how to define handlers, wire them with notify, group them with the listen directive, run them early with meta: flush_handlers, organize them in roles, and apply advanced options such as run_once, loops, and --force-handlers. It also compared handlers to ordinary tasks and walked through a practical Nginx example that demonstrates idempotent restart behavior.
You can now implement idempotent service management in your playbooks, avoid unnecessary restarts in production, and debug handlers that fail to run by checking change reporting, name matching, and verbose output.
To go further, structure reusable automation with How To Use Ansible Roles to Abstract Your Infrastructure Environment, operate across hosts with How To Manage Multiple Servers With Ansible Ad Hoc Commands, and revisit playbook fundamentals in Configuration Management 101: Writing Ansible Playbooks.