Basic organization of Ansible projects

In typical Ansible usage, tasks are not usually placed directly in the playbooks. Instead, tasks are organized into roles, and each playbook lists the roles to run along with the hosts they are intended for.

Example: Role-based playbook
- hosts: all #1 
  become: true
  roles: #2
    - basic

- hosts: controller
  become: true
  roles:
    - ntp_server

- hosts: all:!controller #3
  become: true
  roles:
    - ntp_others

- hosts: all
  become: true
  roles:
    - openstack_packages

- hosts: controller
  become: true
  roles:
    - sql_database
    - rabbitmq
    - memcached

  1. Hosts on which the listed roles will be executed.
  2. List of roles to execute on the indicated hosts.
  3. Run on all hosts except those in the controller group.
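
These host patterns assume an inventory in which controller is a defined group. A minimal hosts.cfg sketch consistent with this playbook (the host names are hypothetical) could look like this:

[controller]
controller.example.org

[compute]
compute1.example.org
compute2.example.org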

Roles are defined in folders named after the role. In addition, each role follows an established subfolder structure, which is as follows:

  • tasks: Contains the file main.yml with the list of tasks to execute. A task can notify an action, or handler (e.g. restarting a service after modifying a configuration file); the task reports the pending action with a notify clause, and the notified handlers are executed once all the tasks of the play have finished.
  • handlers: Contains the file main.yml with the list of actions that respond to pending notifications.
  • templates: Contains the template files that will be deployed on remote machines after variable substitution. The files are placed in a folder structure that mirrors the one they will have on the destination host, relative to the templates folder. For example, a template to customize /etc/hosts on target machines would be placed in templates/etc/hosts, since on the target machines it goes in /etc/hosts.

Example: Organization of a role

ntp_server/
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
└── templates
    └── etc
        └── chrony
            └── chrony.conf
When creating a role, we can build the role's folder and subfolder structure with a single command. The following command creates the folder for the ntp_server role and its handlers, tasks, and templates subfolders.
$ mkdir -p ntp_server/{handlers,tasks,templates}
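Alternatively, ansible-galaxy can generate a more complete role skeleton (including defaults, vars, meta, and tests), from which the unused subfolders can simply be removed:
$ ansible-galaxy init ntp_server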
An Ansible project would be organized like this:
├── ansible.cfg #1
├── group_vars #2
│   └── all.yml
├── hosts.cfg #3
├── playbook-1.yml #4
├── playbook-2.yml
├── ...
├── roles #5
│   ├── barbican
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── etc
│   │           └── barbican
│   │               ├── barbican-api-paste.ini
│   │               └── barbican.conf
│   ├── ...
│   ├── heat
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── etc
│   │           └── heat
│   │               └── heat.conf
│   ├── ...
└── site.yml #6

  1. Project configuration file (e.g. to indicate the inventory file)
  2. Variables accessible to all playbooks
  3. Host inventory file
  4. Project playbooks
  5. Project roles
  6. Optional playbook containing the calls to all the playbooks in the project
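
For example, the variables used later in the chrony template can be defined once in group_vars/all.yml; a minimal sketch (the management_network value here is only an illustrative assumption):

ntp_server: 1.es.pool.ntp.org
management_network: 10.0.0.0

Likewise, ansible.cfg can point at the inventory file so that it does not have to be passed with -i on every ansible-playbook run:

[defaults]
inventory = hosts.cfg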

If an Ansible project contains a large number of playbooks, it is a good idea to create a playbook that calls them all. In Ansible this is done with include.

For example, site.yml contains the calls to all the playbooks that make up a complex deployment:

- include: playbook-basic.yml
- include: playbook-keystone.yml
- include: playbook-glance.yml
- include: playbook-nova.yml
- include: playbook-neutron.yml
...
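
Note that in newer Ansible versions (2.4 and later), playbook-level include is deprecated in favor of import_playbook, so the same site.yml would be written as follows:

- import_playbook: playbook-basic.yml
- import_playbook: playbook-keystone.yml
...

With either form, the whole deployment can then be run with a single command, e.g. $ ansible-playbook site.yml (assuming ansible.cfg points at the inventory file).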
Example: tasks/main.yml with the tasks of a role
- name: Install chrony
  apt:
    name: chrony
    state: latest

- name: Setup chrony on controller
  template: #1
    src: etc/chrony/chrony.conf
    dest: /etc/chrony/chrony.conf
    owner: root
    group: root
    mode: '0644'
  notify: restart chrony #2

  1. Use of a template file
  2. Notifies an action (handler) that runs at the end of the play
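
A notified handler runs only once, at the end of the play, even if several tasks notify it. When a pending restart has to happen earlier (for example, before a later task that talks to the restarted service), Ansible's meta module can flush the queued handlers at that point; a minimal sketch:

- name: Flush pending handlers before continuing
  meta: flush_handlers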

Example: Template templates/etc/chrony/chrony.conf

pool 2.debian.pool.ntp.org offline iburst

server {{ntp_server}} iburst #1
allow {{management_network}}/24

keyfile /etc/chrony/chrony.keys
commandkey 1
driftfile /var/lib/chrony/chrony.drift
log tracking measurements statistics
logdir /var/log/chrony
maxupdateskew 100.0
dumponexit
dumpdir /var/lib/chrony
logchange 0.5
hwclockfile /etc/adjtime
rtcsync

  1. Use of variables. The file will be created on the destination servers with the values assigned to the variables (e.g. ntp_server: 1.es.pool.ntp.org).

Example: handlers/main.yml

- name: restart chrony #1 
  service:
    name: chrony
    state: restarted

  1. The handler name must match the one indicated in the notify clause of the task
