Get Started With Terraform and Cisco Modeling Labs
Infrastructure as Code (IaC) is a hot topic these days, and the IaC tool of choice is Terraform by HashiCorp. Terraform is a provisioning tool that can stand up infrastructure for just about any application, backed by a long list of providers covering a wide range of target platforms.
Terraform’s list of providers now includes Cisco Modeling Labs (CML) 2, so we can use Terraform to control virtual network infrastructure running on CML2. Keep reading to learn how to get started with Terraform and CML, from the initial configuration through its advanced features.
How does Terraform work?
Terraform uses code to describe the desired state of the required infrastructure and to track this state over the infrastructure’s lifetime. This code is written in HashiCorp Configuration Language (HCL). When the desired state changes, Terraform figures out the differences (state changes) needed to update the infrastructure and reach the new state. Eventually, when the infrastructure isn’t needed anymore, Terraform can destroy it.
A Terraform provider offers resources (things that have state) and data sources (read-only data without state).
In CML2 terms, examples include:
- Resources: Labs, nodes, links
- Data sources: Labs, nodes, and links, as well as available node and image definitions, available bridges for external connectors, user lists, groups, etc.
NOTE: Currently, only a few data sources are implemented.
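As a minimal sketch of the distinction (the attribute names are taken from the examples later in this article and should be treated as illustrative): a resource block creates and tracks something on the controller, while a data block only reads what is already there.

    # Resource: Terraform creates this lab and tracks its state.
    resource "cml2_lab" "demo" {
      title = "demo lab"
    }

    # Data source: read-only lookup of a lab that already exists on the controller.
    data "cml2_lab" "existing" {
      title = "already-running lab"
    }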
Getting started with Terraform and CML
To get started with Terraform and CML, you’ll need a running CML2 instance, its address and credentials, and the Terraform binary installed on your computer (see the HashiCorp installation guide linked in the references).
Define and initialize a workspace
First, we’ll create a new directory and change into it:
    $ mkdir tftest
    $ cd tftest
All the configuration and state required by Terraform stays in this directory.
The code snippets presented need to go into a Terraform configuration file, typically a file called main.tf. However, configuration blocks can also be spread across multiple files, as Terraform will combine all files with the .tf extension in the current working directory.
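For example (an illustrative layout, not a requirement), the blocks shown in this article could be split across files like this:

    tftest/
    ├── main.tf        # terraform {} block with required_providers
    ├── provider.tf    # provider "cml2" {} configuration
    ├── variables.tf   # variable declarations
    └── lab.tf         # cml2_lab, cml2_node, cml2_link, cml2_lifecycle resources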
The following code block tells Terraform that we want to use the CML2 provider. It will download and install the latest available version from the registry at initialization. We add this to a new file called main.tf:
    terraform {
      required_providers {
        cml2 = {
          source = "registry.terraform.io/ciscodevnet/cml2"
        }
      }
    }
With the provider defined, we can now initialize the environment. This will download the provider binary from the HashiCorp registry and install it on the local computer. It will also create various files and a directory that holds additional Terraform configuration and state.
    $ terraform init

    Initializing the backend...

    Initializing provider plugins...
    - Finding latest version of ciscodevnet/cml2...
    - Installing ciscodevnet/cml2 v0.4.1...
    - Installed ciscodevnet/cml2 v0.4.1 (self-signed, key ID A97E6292972408AB)

    Partner and community providers are signed by their developers.
    If you'd like to know more about provider signing, you can read about it here:
    https://www.terraform.io/docs/cli/plugins/signing.html

    Terraform has created a lock file .terraform.lock.hcl to record the provider
    selections it made above. Include this file in your version control repository
    so that Terraform can guarantee to make the same selections by default when
    you run "terraform init" in the future.

    Terraform has been successfully initialized!

    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.

    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
    $
Configure the provider
The CML2 Terraform provider needs credentials to access the CML2 controller. These credentials are configured as shown in the following example; of course, the address, username, and password need to match your actual environment:
provider "cml2" { address = "https://cml-controller.cml.lab" username = "admin" password = "supersecret" # skip_verify = true }
The skip_verify attribute is commented out in the example. You might want to uncomment it to work with the default certificate that ships with the product, which is signed by the Cisco CML CA. For anything beyond lab use, consider installing a trusted certificate chain on the controller instead.
While the above works OK, it’s not advisable to configure clear-text credentials in files that might end up in source code management (SCM). A better approach is to use environment variables, ideally in combination with some tooling like direnv. As a prerequisite, the variables need to be defined within the configuration:
variable "address" { description = "CML controller address" type = string default = "https://cml-controller.cml.lab" } variable "username" { description = "cml2 username" type = string default = "admin" } variable "password" { description = "cml2 password" type = string sensitive = true }
NOTE: Adding the “sensitive” attribute ensures that this value is not printed in any output.
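With the variables declared, the provider block can reference them instead of hard-coded literals (a small sketch based on the variable names above):

    provider "cml2" {
      address  = var.address
      username = var.username
      password = var.password
      # skip_verify = true
    }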
We can now create a .envrc file so that direnv inserts values from the environment into our provider configuration. You can also achieve the same thing by manually sourcing this file via source .envrc; the benefit of direnv is that this happens automatically whenever you change into the directory.
    TF_VAR_address="https://cml-controller.cml.lab"
    TF_VAR_username="admin"
    TF_VAR_password="secret"

    export TF_VAR_username TF_VAR_password TF_VAR_address
This decouples the Terraform configuration files from the credentials/dynamic values so that they can easily be added to SCM, like Git, without exposing sensitive values, such as passwords or addresses.
Define the CML2 lab infrastructure
With the basic configuration done, we can now describe our CML2 lab infrastructure. We have two options:
- Import-mode
- Define-mode
Import-mode
This imports an existing CML2 lab YAML topology file as a Terraform lifecycle resource. This is the “one-stop” solution, defining all nodes, links and interfaces in one go. In addition, you can use Terraform templating to replace properties of the imported lab (see below).
Import-mode example
Here’s a simple import-mode example:
resource "cml2_lifecycle" "this" { topology = file("topology.yaml") }
The file topology.yaml will be imported into CML2 and then started. We now need to “plan” the change:
    $ terraform plan

    Terraform used the selected providers to generate the following execution plan.
    Resource actions are indicated with the following symbols:
      + create

    Terraform will perform the following actions:

      # cml2_lifecycle.this will be created
      + resource "cml2_lifecycle" "this" {
          + booted   = (known after apply)
          + id       = (known after apply)
          + lab_id   = (known after apply)
          + nodes    = { } -> (known after apply)
          + state    = (known after apply)
          + topology = (sensitive value)
        }

    Plan: 1 to add, 0 to change, 0 to destroy.
    $
Then apply it (-auto-approve is a shortcut and should be handled with care):
$ terraform apply -auto-approve
    Terraform used the selected providers to generate the following execution plan.
    Resource actions are indicated with the following symbols:
      + create

    Terraform will perform the following actions:

      # cml2_lifecycle.this will be created
      + resource "cml2_lifecycle" "this" {
          + booted   = (known after apply)
          + id       = (known after apply)
          + lab_id   = (known after apply)
          + nodes    = { } -> (known after apply)
          + state    = (known after apply)
          + topology = (sensitive value)
        }

    Plan: 1 to add, 0 to change, 0 to destroy.

    cml2_lifecycle.this: Creating...
    cml2_lifecycle.this: Still creating... [10s elapsed]
    cml2_lifecycle.this: Still creating... [20s elapsed]
    cml2_lifecycle.this: Creation complete after 25s [id=b75992ec-d345-4638-a6fd-2c0b640a3c22]

    Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
    $
We can now look at the state:
    $ terraform show
    # cml2_lifecycle.this:
    resource "cml2_lifecycle" "this" {
        booted   = true
        id       = "b75992ec-d345-4638-a6fd-2c0b640a3c22"
        nodes    = {
            # (3 unchanged elements hidden)
        }
        state    = "STARTED"
        topology = (sensitive value)
    }

    $ terraform console
    > keys(cml2_lifecycle.this.nodes)
    tolist([
      "0504773c-5396-44ff-b545-ccb734e11691",
      "22271a81-1d3a-4403-97de-686ebf0f36bc",
      "2bccca61-d4ee-459a-81bd-96b32bdaeaed",
    ])
    > cml2_lifecycle.this.nodes["0504773c-5396-44ff-b545-ccb734e11691"].interfaces[0].ip4[0]
    "192.168.122.227"
    > exit
    $
Simple import example with a template
This example is similar to the one above, but this time we import the topology using templatefile(), which allows templating of the topology. Assuming that the CML2 topology YAML file starts with
    lab:
      description: "description"
      notes: "notes"
      timestamp: 1606137179.2951126
      title: ${toponame}
      version: 0.0.4
    nodes:
      - id: n0
    [...]
then using this HCL
resource "cml2_lifecycle" "this" { topology = templatefile("topology.yaml", { toponame = "yolo lab" }) }
will replace the title: ${toponame} placeholder in the YAML with the string "yolo lab" at import time. Note that instead of a string literal, it's perfectly fine to use a variable like var.toponame or other HCL features, as sketched below.
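Here's a minimal sketch of that variant, assuming a toponame variable is declared alongside the variables from earlier:

    variable "toponame" {
      description = "lab title to insert into the topology"
      type        = string
      default     = "yolo lab"
    }

    resource "cml2_lifecycle" "this" {
      topology = templatefile("topology.yaml", { toponame = var.toponame })
    }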
Define-mode usage
Define-mode starts with the definition of a lab resource and then adds node and link resources. In this mode, resources will only be created. If we want to control the runtime state (e.g., start/stop/wipe the lab), then we need to link these elements to a lifecycle resource.
Here’s an example:
resource "cml2_lab" "this" { } resource "cml2_node" "ext" { lab_id = cml2_lab.this.id nodedefinition = "external_connector" label = "Internet" configuration = "bridge0" } resource "cml2_node" "r1" { lab_id = cml2_lab.this.id label = "R1" nodedefinition = "alpine" } resource "cml2_link" "l1" { lab_id = cml2_lab.this.id node_a = cml2_node.ext.id node_b = cml2_node.r1.id }
This will create the lab, the nodes, and the link between them. Without further configuration, nothing will be started. If these resources should be started, then you’ll need a CML2 lifecycle resource:
resource "cml2_lifecycle" "top" { lab_id = cml2_lab.this.id elements = [ cml2_node.ext.id, cml2_node.r2.id, cml2_link.l1.id, ] }
Here’s what this looks like after applying the combined plan.
NOTE: For brevity, some attributes are omitted and have been replaced by […]:
    $ terraform apply -auto-approve

    Terraform used the selected providers to generate the following execution plan.
    Resource actions are indicated with the following symbols:
      + create

    Terraform will perform the following actions:

      # cml2_lab.this will be created
      + resource "cml2_lab" "this" {
          + created     = (known after apply)
          + description = (known after apply)
          + groups      = [ ] -> (known after apply)
          + id          = (known after apply)
            [...]
          + title       = (known after apply)
        }

      # cml2_lifecycle.top will be created
      + resource "cml2_lifecycle" "top" {
          + booted   = (known after apply)
          + elements = [
              + (known after apply),
              + (known after apply),
              + (known after apply),
            ]
          + id       = (known after apply)
          + lab_id   = (known after apply)
          + nodes    = { } -> (known after apply)
          + state    = (known after apply)
        }

      # cml2_link.l1 will be created
      + resource "cml2_link" "l1" {
          + id               = (known after apply)
          + interface_a      = (known after apply)
          + interface_b      = (known after apply)
          + lab_id           = (known after apply)
          + label            = (known after apply)
          + link_capture_key = (known after apply)
          + node_a           = (known after apply)
          + node_a_slot      = (known after apply)
          + node_b           = (known after apply)
          + node_b_slot      = (known after apply)
          + state            = (known after apply)
        }

      # cml2_node.ext will be created
      + resource "cml2_node" "ext" {
          + configuration = (known after apply)
          + cpu_limit     = (known after apply)
          + cpus          = (known after apply)
            [...]
          + x             = (known after apply)
          + y             = (known after apply)
        }

      # cml2_node.r1 will be created
      + resource "cml2_node" "r1" {
          + configuration = (known after apply)
          + cpu_limit     = (known after apply)
          + cpus          = (known after apply)
            [...]
          + x             = (known after apply)
          + y             = (known after apply)
        }

    Plan: 5 to add, 0 to change, 0 to destroy.

    cml2_lab.this: Creating...
    cml2_lab.this: Creation complete after 0s [id=306f3ebf-c819-4b89-a99d-138a58ca7195]
    cml2_node.ext: Creating...
    cml2_node.r1: Creating...
    cml2_node.ext: Creation complete after 1s [id=32f187bf-4f53-462a-8e36-43cd9b6e17a4]
    cml2_node.r1: Creation complete after 1s [id=5d59a0d3-70a1-45a1-9b2a-4cecd9a4e696]
    cml2_link.l1: Creating...
    cml2_link.l1: Creation complete after 0s [id=a083c777-abab-47d2-95c3-09d897e01d2e]
    cml2_lifecycle.top: Creating...
    cml2_lifecycle.top: Still creating... [10s elapsed]
    cml2_lifecycle.top: Still creating... [20s elapsed]
    cml2_lifecycle.top: Creation complete after 22s [id=306f3ebf-c819-4b89-a99d-138a58ca7195]

    Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
    $
The elements lifecycle attribute is needed to tie the individual nodes and links into the lifecycle resource. This ensures the correct sequence of operations based on the dependencies between the resources.
NOTE: It’s not possible to use both import and elements at the same time. In addition, when importing a topology using the topology attribute, a lab_id cannot be set.
Advanced usage
The lifecycle resource has a few more configuration parameters that control advanced features. Here's a list of those parameters and what they do; a combined sketch follows the list:
- configs is a map of strings. The keys are node labels, and the values are node configurations. When present, the provider checks all node labels for a match and, if one is found, replaces that node's configuration with the provided one. This allows you to "inject" configurations into a topology file. The base topology file could contain no configurations at all, in which case the actual configurations are provided via file("node1-config") or as a literal configuration string, as shown here:
configs = { "node-1": file("node1-config") "node-2": "hostname node2" }
- staging defines the node start sequence when the lab is started. Node tags are used to achieve this. Here’s an example:
    staging = {
      stages          = ["infra", "core", "site-1"]
      start_remaining = true
    }
This example ensures that nodes with the tag "infra" are started first. The provider waits until all nodes with this tag are marked as "booted." Then all nodes with the tag "core" are started, and so on. If, after the end of the stage list, there are still stopped nodes, the start_remaining flag determines whether they should remain stopped or be started as well (the default is true, i.e., they will all be started).
- state defines the runtime state of the lab. By default, this is STARTED, which means the lab will be started. The options are STARTED, STOPPED, and DEFINED_ON_CORE:
– STARTED is the default
– STOPPED can be set only if the lab is currently started; otherwise it will produce a failure
– DEFINED_ON_CORE wipes the lab if the current state is either STARTED or STOPPED
- timeouts can be used to set different timeouts for operations. This might be necessary for big labs that take a long time to start. The default is 2h.
- wait is a boolean flag that defines whether the provider should wait for convergence. For example, if this is set to false when the lab starts, the provider will start the lab but will not wait until all nodes within the lab are "ready."
- id is a read-only computed attribute. A UUIDv4 will be auto-generated at create time and assigned to this ID.
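Here's a combined sketch that pulls several of these parameters together, reusing the resources from the define-mode example above. The values, tags, and injected configuration are illustrative only, and not every combination has been validated:

    resource "cml2_lifecycle" "top" {
      lab_id = cml2_lab.this.id
      elements = [
        cml2_node.ext.id,
        cml2_node.r1.id,
        cml2_link.l1.id,
      ]
      # inject a configuration into the node labeled "R1"
      configs = {
        "R1": "hostname R1"
      }
      # start nodes tagged "infra" first, then "core", then everything else
      staging = {
        stages          = ["infra", "core"]
        start_remaining = true
      }
      state = "STARTED"  # desired runtime state of the lab
      wait  = true       # block until the lab has converged
    }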
CRUD operations
Of the four basic operations of resource management, create, read, update, and delete (CRUD), the previous sections primarily described the create and read aspects. But Terraform can also handle update and delete.
Plans can be changed, new resources can be added, and existing resources can be removed or changed. This is always the result of editing your Terraform configuration files and having Terraform figure out the required state changes via terraform plan, followed by terraform apply once you are satisfied with those changes.
Updating resources
It is possible to update resources, but not every combination is seamless. Here are a few things to consider:
- Only a few node attributes can be changed seamlessly; examples are coordinates (x/y), label, and configuration
- Some plan changes will re-create resources. For example, running nodes will be destroyed and restarted if the node definition is changed (see the sketch below)
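As a sketch, based on the r1 node defined earlier: changing the label or coordinates results in an in-place update, while changing the node definition would force the node to be replaced.

    resource "cml2_node" "r1" {
      lab_id         = cml2_lab.this.id
      label          = "R1-renamed"   # label changes are applied in place
      nodedefinition = "alpine"       # changing this forces re-creation of the node
      x              = 100            # coordinates can also be updated in place
      y              = 200
    }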
Deleting resources
Finally, a terraform destroy will delete all created resources from the controller.
Data Sources
As opposed to resources, data sources don't hold any state. They are used to read data from the controller, which can then be referenced in other data sources or resources. A good example, although not yet implemented, would be a list of available node and image definitions. By reading these into a data source, the HCL defining the infrastructure could take the available definitions into account.
There are, however, a few data sources implemented already (see the sketch after this list):
- Node: Reads a node by providing a lab and a node ID
- Lab: Reads a lab by providing either a lab ID or a lab title
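A short sketch of both. The exact attribute names are assumptions based on the lookup keys described above (lab by ID or title, node by lab and node ID); the node ID shown is just a placeholder taken from the earlier output:

    # Look up an existing lab by its title...
    data "cml2_lab" "yolo" {
      title = "yolo lab"
    }

    # ...and read one of its nodes by ID.
    data "cml2_node" "r1" {
      lab_id = data.cml2_lab.yolo.id
      id     = "0504773c-5396-44ff-b545-ccb734e11691"
    }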
Output
All data in resources and data sources can be used to drive output from Terraform. A useful example in the context of CML2 is the retrieval of IP addresses from running nodes. Here’s the way to do it, assuming that the lifecycle resource is called this and also assuming that R1 is able to acquire an IP address via an external connector:
cml2_lifecycle.this.nodes["0504773c-5396-44ff-b545- ccb734e11691"].interfaces[0].ip4[0]
Note, however, that output is also calculated when resources might not exist yet, so the above can produce an error if the node is not found or the interface list is empty. To guard against this, you can use HCL conditional expressions:
output "r1_ip_address" { value = ( cml2_lifecycle.top.nodes[cml2_node.r1.id].interfaces[0].ip4 == null ? "undefined" : ( length(cml2_lifecycle.top.nodes[cml2_node.r1.id].interfaces[0].ip4) > 0 ? cml2_lifecycle.top.nodes[cml2_node.r1.id].interfaces[0].ip4[0] : "no ip" ) ) }
Output:
r1_ip_address = "192.168.255.115"
Conclusion
The CML2 provider fits nicely into the overall Terraform ecosystem. With the flexibility HCL provides and by combining it with other Terraform providers, it's never been easier to automate virtual network infrastructure inside CML2. What will you do with these new capabilities? We're curious to hear about it! Let's continue the conversation on the Cisco Learning Network's Cisco Modeling Labs Community.
Single users can purchase Cisco Modeling Labs – Personal and Cisco Modeling Labs – Personal Plus licenses from the Cisco Learning Network Store. For teams, explore CML – Enterprise and CML – Higher Education licensing and contact us to learn how Cisco Modeling Labs can power your NetDevOps transformation.
Join the Cisco Learning Network today for free.
References
- https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli
- https://github.com/CiscoDevNet/terraform-provider-cml2
- https://registry.terraform.io/providers/CiscoDevNet/cml2
- https://developer.hashicorp.com/terraform/language
- https://direnv.net/