Pulumi: Creating Kubernetes clusters with Rancher and OpenStack as infrastructure provider - Part 4

 

Pulumi has a Rancher package that allows Rancher resources to be configured as code. In this use case we start from a previous installation of Rancher (see Use case. Creating a virtual machine configured with Rancher in OpenStack). The objective is to create a Kubernetes cluster with Pulumi, obtaining a replicable and repeatable deployment in line with our goal of treating infrastructure as code. The infrastructure for the Kubernetes cluster will be provided by an OpenStack cloud. We will therefore start by creating the node templates that hold the configuration of the cluster nodes (we will use different templates for the control, etcd and worker roles). Once the node templates have been created, we will create the Kubernetes cluster nodes from the appropriate template.

Generating the Rancher API Credentials
Using the Pulumi Rancher package requires a Rancher API key so that Pulumi can create and update Rancher resources. The credentials are obtained from Rancher by following these steps:
1 - In the user dropdown select API & Keys.

2 - On the API & Keys screen press the Add Key button.
3 - Leave the default values in the dialog box.


4 - Copy the generated values. This is the only time they are displayed and they cannot be retrieved later.

If these credentials are lost, new ones will have to be generated since the generated password (secret key) cannot be recovered.

After following these steps we will have the access credentials for interacting with Rancher through its API, which is what we need to manage Rancher resources from Pulumi.
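If we want, we can check that the generated key works by querying the Rancher v3 API, using the access key and secret key as HTTP basic authentication credentials (shown here with the example values of this article); if the credentials are valid, the API answers with a JSON description of its resources:
$ curl -u "token-tj6vf:8pq6g2dpf7njgmncglqrsfggrbwx57........." https://ranchitodesa.stic.ual.es/v3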

Creation and initial configuration of the project
From an empty directory created for the project we will create the Pulumi project with the command pulumi new. Since there is no project template defined for Rancher, we will create the project with the generic TypeScript template:
$ pulumi new typescript
Next:
  • We will accept the name of the project, whose default value is that of the directory in which it is located.
  • We will complete the description with K8s cluster configuration.
  • We will accept the stack name (dev).
Once the project creation options have been accepted, the project's dependencies will be installed and a few moments later the project will be ready to run.
As a result we will have a project with the following structure:
├── .gitignore
├── index.ts #1
├── package.json #2
├── Pulumi.yaml #3
└── tsconfig.json
  1. File where we will include the resources to deploy.
  2. Dependencies file.
  3. Configuration of the project name, description and execution runtime (see the example below).
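For reference, the generated Pulumi.yaml will look roughly like the following (the project name is just an assumed example; the real one defaults to the directory name and the description is the one entered above):
name: rancher-k8s-cluster
runtime: nodejs
description: K8s cluster configuration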
Installing the Rancher2 package
We will install the Rancher2 package for Pulumi with the command
npm install @pulumi/rancher2
This will update the project's dependencies file package.json.
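To use the package from index.ts it must be imported, together with the Node.js fs module that the code in the following sections uses to read key and user-data files. A minimal header for index.ts would be:
import * as rancher2 from "@pulumi/rancher2";
import * as fs from "fs";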
Configuration of access credentials to Rancher from Pulumi
The values obtained in Generating the Rancher API credentials must be passed to Pulumi. Following the Rancher credential configuration steps in Pulumi, there are two options:
  • Set the environment variables RANCHER_URL, RANCHER_ACCESS_KEY and RANCHER_SECRET_KEY. In our case it would be
export RANCHER_URL=https://ranchitodesa.stic.ual.es/v3
export RANCHER_ACCESS_KEY=token-tj6vf
export RANCHER_SECRET_KEY=8pq6g2dpf7njgmncglqrsfggrbwx57.........
  • Set the configuration in the project stack to facilitate collaborative work.
$ pulumi config set rancher2:apiUrl https://ranchitodesa.stic.ual.es/v3
$ pulumi config set rancher2:accessKey token-tj6vf --secret
$ pulumi config set rancher2:secretKey 8pq6g2dpf7njgmncglqrsfggrbwx57......... --secret
The configured credentials are not sent to pulumi.com.
If we follow this alternative, a Pulumi.dev.yaml file is generated with the configuration. For security reasons, this information should not be pushed indiscriminately to public repositories.
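Optionally, instead of relying on the default provider picking up this configuration, an explicit provider instance can be created and passed to each resource. The following is just a sketch, assuming the stack configuration keys set above:

import * as pulumi from "@pulumi/pulumi";
import * as rancher2 from "@pulumi/rancher2";

// Read the rancher2 namespaced configuration values set with pulumi config set
const rancherConfig = new pulumi.Config("rancher2");

// Explicit provider built from the stack configuration
const rancherProvider = new rancher2.Provider("rancher", {
    apiUrl: rancherConfig.require("apiUrl"),
    accessKey: rancherConfig.requireSecret("accessKey"),
    secretKey: rancherConfig.requireSecret("secretKey"),
});

Resources created later would then receive { provider: rancherProvider } in their options to use this provider explicitly.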
Creating the node templates
As the Kubernetes cluster we are going to create uses OpenStack as its infrastructure provider, and each OpenStack installation has its own values, we must create a template that gathers all these particular parameters: the URL and access credentials, the details of the project whose resources will be consumed, and the image data to be used for the nodes created from the template.
Node templates are created with the NodeTemplate resource of the rancher2 module. It is enough to indicate a name for the template and a JSON object with the options. In our case we will also include a description of the template.
// Contents of the private key for the os-sistemas keypair, reused by the node templates below
const sistemas_ssh_key = fs.readFileSync('/Users/manolo/.ssh/os-sistemas', 'utf8');

// Create a new rancher2 Large Ubuntu Node Template up to Rancher 2.1.x

const ubuntuLargeTemplate = new rancher2.NodeTemplate("ubuntu-18-04-large-pulumi", {
    openstackConfig: {
        authUrl: "http://openstack.stic.ual.es:5000/v3", //1
        availabilityZone: "nova", //2
        domainName: "default", //3
        endpointType: "publicURL",
        flavorName: "large", //4
        floatingIpPool: "ual-net", //5
        imageName: "Ubuntu 18.04 LTS", //6
        keypairName: "os-sistemas", //7
        netName: "Sistemas-prod-net", //8
        password: "xxxx", //9
        privateKeyFile: sistemas_ssh_key, //10
        region: "RegionOne", //11
        secGroups: "default", //12
        sshPort: "22", //13
        sshUser: "ubuntu", //14
        tenantName: "Sistemas-prod", //15
        username: "sistemas", //16
        userDataFile: fs.readFileSync('./ubuntu-node-setup.sh', 'utf8') //17
    },
    description: "Ubuntu 18.04 LTS large Pulumi",
});
  1. Authentication URL of the particular OpenStack installation
  2. Availability Zone Name
  3. OpenStack domain name
  4. Flavor name for instances using this template
  5. Name of the external network that provides floating IP addresses to instances using this template
  6. Name of the image to use as a base for instances using this template
  7. Name of the OpenStack keypair whose public key will be injected into instances using this template
  8. OpenStack project network to which instances using this template will connect
  9. OpenStack user password
  10. Private key used to be able to interact with the instance (here the contents of the private key file read earlier are passed in)
  11. OpenStack Region
  12. OpenStack security groups applicable to the instance
  13. SSH access port to the created instance
  14. SSH username of the OS image used on the Kubernetes cluster nodes
  15. Name of the project that provides the resources to the Kubernetes cluster
  16. OpenStack username that owns the project that provides the resources to the Kubernetes cluster
  17. Option to pass a script to configure the instance (e.g. to apply our organization's particular security configuration)
This defines a template assigned to the constant ubuntuLargeTemplate. Assigning the created resource to a constant or variable allows it to be referenced later; in our case it will be used when creating the Kubernetes cluster node pools.
Analogously we will create a similar template, but with the medium flavor, for nodes that need fewer resources.
// Create a new rancher2 Medium Ubuntu Node Template up to Rancher 2.1.x

const ubuntuMediumTemplate = new rancher2.NodeTemplate("ubuntu-18-04-medium-pulumi", {
    openstackConfig: {
        authUrl: "http://openstack.stic.ual.es:5000/v3",
        availabilityZone: "nova",
        domainName: "default",
        endpointType: "publicURL",
        flavorName: "medium", //1
        floatingIpPool: "ual-net",
        imageName: "Ubuntu 18.04 LTS",
        keypairName: "os-sistemas",
        netName: "Sistemas-prod-net",
        password: "xxxx",
        privateKeyFile: sistemas_ssh_key,
        region: "RegionOne",
        secGroups: "default",
        sshPort: "22",
        sshUser: "ubuntu",
        tenantName: "Sistemas-prod",
        username: "sistemas",
        userDataFile: fs.readFileSync('./ubuntu-node-setup.sh', 'utf8')

    },
    description: "Ubuntu 18.04 LTS medium Pulumi",
});
  1. medium flavor for less demanding nodes
Cluster Creation
Clusters are created with the Cluster resource of the rancher2 module. It is enough to indicate a name for the cluster and a JSON object with the options. In our case we will include a description and the configuration of RKE (Rancher Kubernetes Engine), the CNCF-certified Kubernetes distribution that runs on Docker. RKE makes it easy to create the Kubernetes cluster. We will configure the network plugin and use OpenStack as the cloud provider. The OpenStack configuration requires values related to the username, password, URL, project, network, and others. More information can be found in the Rancher OpenStack cloud provider documentation. There is also useful information in Using OpenStack as an infrastructure provider in Rancher.
Below is the code to create a cluster using OpenStack as the infrastructure provider
// Create a new rancher2 RKE Cluster
const cluster = new rancher2.Cluster("cluster-pulumi", {
    description: "Cluster Pulumi Desa",
    rkeConfig: {
        network: {
            plugin: "canal",
        },

        cloudProvider: {
            name: "openstack",
            openstackCloudProvider: {
                blockStorage: {
                    ignoreVolumeAz: true,
                    trustDevicePath: false
                },
                global: {
                    authUrl: "http://openstack.stic.ual.es:5000/v3", //1
                    domainName: "default", //2
                    tenantName: "Sistemas-prod", //3
                    password: "sistemas", //4
                    username: "xxxx", //5
                },
                loadBalancer: {
                    createMonitor: false,
                    floatingNetworkId: "30bf68df-xxxxxx", //6
                    manageSecurityGroups: false,
                    monitorMaxRetries: 0,
                    subnetId: "aabe1065-xxxxxx", //7
                    useOctavia: false
                },
                metadata: {
                    requestTimeout: 0
                },
                route: {}
            }
        },

    },
    clusterAuthEndpoint: {
        enabled: true
    }
});
  1. Authentication URL
  2. Domain to which the user belongs
  3. Name of the project providing the infrastructure
  4. Password
  5. Username
  6. ID of the external network, the one that provides the floating IPs
  7. ID of the subnet of the project that provides the infrastructure

Network and subnet IDs can be found in the OpenStack menu Network | Networks. It is enough to select the corresponding network and all its properties, including its ID, will be shown. In the case of subnets, use the subnet ID, not the Network ID. They can also be obtained from the OpenStack CLI, as shown below.
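For example, with the OpenStack CLI (the network names below are the ones used in this article; adapt them to your installation):
$ openstack network show ual-net -f value -c id
$ openstack subnet list --network Sistemas-prod-net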

Creating cluster node pools
Clusters are made up of groups of nodes (node pools). A node pool is a set of nodes defined according to a template and can have one or more of these roles: etcd, control and worker.

Considerations for the number of nodes in a role
The cluster will need nodes that provide the etcd, control and worker functions. These functions can be on separate nodes or shared between nodes. In any case, the cluster must satisfy the following restrictions regarding the number of nodes of each role:
etcd: 1, 3 or 5
control: 1 or more
worker: 1 or more
In our case we will create a node pool for each function, keeping the functions separated into different node pools.
Node pools are created with the NodePool resource of the rancher2 module. It is enough to indicate a name for the node pool and a JSON object with the options. In our case we will include the cluster to which it applies, the prefix that will be used to name the OpenStack instances of the pool, the node template to use, the number of nodes the pool will have and the activated roles (etcd, control and worker). By default, roles are disabled.
Below is the code for the three node pools configured for the cluster created above. Each node pool corresponds to one of the roles (etcd, control and worker). As this is just an example, each node pool only has one node (quantity: 1) defined.

// Create a new control rancher2 Node Pool //1
const controlNodePool = new rancher2.NodePool("control-node-pool-pulumi-desa", {
    clusterId: cluster.id, //2
    hostnamePrefix: "control-pulumi-desa", //3
    nodeTemplateId: ubuntuLargeTemplate.id, //4
    quantity: 1, //5
    controlPlane: true, //6
});

// Create a new etcd rancher2 Node Pool //7
const etcdNodePool = new rancher2.NodePool("etcd-node-pool-pulumi-desa", {
    clusterId: cluster.id,
    hostnamePrefix: "etcd-pulumi-desa",
    nodeTemplateId: ubuntuMediumTemplate.id, //8
    quantity: 1,
    etcd: true //9
});

// Create a new worker rancher2 Node Pool //10
const workerNodePool = new rancher2.NodePool("worker-node-pool-pulumi-desa", {
    clusterId: cluster.id,
    hostnamePrefix: "worker-pulumi-desa",
    nodeTemplateId: ubuntuLargeTemplate.id, //11
    quantity: 1,
    worker: true //12
});

  1. Control role node pool
  2. Id of the cluster to which the node pool belongs
  3. Prefix for the virtual machines corresponding to the nodes in the pool
  4. large template for the nodes of the control pool
  5. 1 node in the pool
  6. Activation of the control role in the control node pool
  7. etcd role node pool
  8. medium template for the nodes of the etcd pool
  9. Activation of the etcd role in the etcd node pool
  10. Worker role node pool
  11. large template for the nodes of the worker pool
  12. Activation of the worker role in the worker node pool
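With the templates, the cluster and the node pools defined, running pulumi up will create all these resources in Rancher and OpenStack. It can also be convenient to export some stack outputs, for example the generated kubeconfig; the following is a minimal sketch, assuming the kubeConfig output exposed by the rancher2.Cluster resource:

import * as pulumi from "@pulumi/pulumi";

// Export the cluster name and its kubeconfig (marked as a secret so it is encrypted in the state and masked in the CLI output)
export const clusterName = cluster.name;
export const kubeconfig = pulumi.secret(cluster.kubeConfig);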

Delete a project
Removing resources from a project is done with the command pulumi destroy.
The command pulumi destroy is a very dangerous operation, as it completely removes the resources of a deployment. Make sure you run it in the right directory first.
Deleting the resources of a project with pulumi destroy removes the resources but still preserves the operation history and stack configuration. To remove the stack entirely, and not just its resources, run the following command

$ pulumi stack rm <stack-name (e.g. dev)>
Troubleshooting corrupt deployments
If a deployment is interrupted before it finishes, a message will appear indicating that the deployment has resources with pending operations. The interruption may be caused by the user cancelling the process, a network outage, or an execution error in the Pulumi CLI.

Since Pulumi has no way of knowing whether an operation it initiated succeeded or failed, resources may have been created that Pulumi is unaware of. Therefore, the next thing to do is to cancel the deployment with the following command
$ pulumi cancel
After this, the stack must be exported and re-imported
$ pulumi stack export | pulumi stack import
To update the state of the stack, execute the command
$ pulumi refresh
Restore from a corrupt state
Sometimes the only way to restore a corrupt deployment is to review the deployment file and see which operations are left pending. To do this, we will first export the deployment state to a file (e.g. state.json) with the following command:
$ pulumi stack export --file state.json
Follow these steps:

  1. Go to the pending_operations section of the deployment file (e.g. state.json) and delete those resources from the infrastructure in case they were created without Pulumi being aware of them.
  2. Edit the file and remove the contents of the pending_operations array, leaving it empty. Save the changes to the file.
  3. Re-import the file, in which pending operations no longer appear:

$ pulumi stack import --file state.json
For more troubleshooting information see the Recovering from an Interrupted Update and Manually Editing Your Deployment sections of the Pulumi troubleshooting guide.
