The Art of Virtual Conducting in OpenStack: Working with Heat
Today we’re going to look at how we can build ready-to-use infrastructure out of virtual devices using the OpenStack orchestration module, Heat.
Heat requires the python-heatclient package, which is included in the repositories of most Linux distributions. It can also be installed from PyPI using the pip utility. Installation instructions can be found under Access in the control panel (Control Panel -> Virtual Private Cloud -> Projects -> Access).
Key Concepts
Before getting into the specifics of working with Heat, we want to clarify what exactly “stacks” and “templates” are.
A stack is a set of cloud resources (machines, logical volumes, networks, etc.) that are interconnected to make up a single structure.
A template is a stack description and is usually a specially formatted text file. The template contains a description of the resources and their connections. Resources can be listed in any order: stacks are assembled automatically. Previously assembled stacks can be used as descriptions for other templates, letting you create nested stacks.
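A previously written template can itself serve as a resource type, which is how nested stacks are declared. As a hedged sketch (the file name and parameter below are made up for illustration, not part of our example):

```yaml
# Illustrative fragment only: "two_servers.yaml" is a hypothetical template file.
resources:
  cluster_a:
    type: two_servers.yaml     # an entire template used as a resource type
    properties:
      key_name: { get_param: key_name }
```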
Let’s look at an example of a template structure and how it’s written. We’ll create a stack made up of two servers, a local network, and a router that connects to a public network.
Template Format
Templates come in several formats. We’ll be using HOT format.
It was created specially for Heat and uses a fairly simple and easy-to-understand syntax. The format is based on YAML, so it’s important to mind indentation and hierarchy when editing templates.
The CFN format (AWS CloudFormation) is also supported to provide compatibility with Amazon EC2 templates.
Template Structure
We’ll use the following template to create a stack:
heat_template_version: 2013-05-23
description: Basic template for two servers, one network and one router
parameters:
  key_name:
    type: string
    description: Name of keypair to assign to servers for ssh authentication
  public_net_id:
    type: string
    description: UUID of public network to outer world
  server_flavor:
    type: string
    description: UUID of virtual hardware configurations that are called flavors in openstack
  private_net_name:
    type: string
    description: Name of private network (L2 level)
  private_subnet_name:
    type: string
    description: Name of private network subnet (L3 level)
  router_name:
    type: string
    description: Name of router that connects private and public networks
  server1_name:
    type: string
    description: Custom name of server1 virtual machine
  server2_name:
    type: string
    description: Custom name of server2 virtual machine
  image_centos7:
    type: string
    description: UUID of glance image with centos 7 distro
  image_debian7:
    type: string
    description: UUID of glance image with debian 7 distro
resources:
  private_net:
    type: OS::Neutron::Net
    properties:
      name: { get_param: private_net_name }
  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: { get_param: private_subnet_name }
      network_id: { get_resource: private_net }
      allocation_pools:
        - start: "192.168.0.10"
          end: "192.168.0.254"
      cidr: "192.168.0.0/24"
      enable_dhcp: True
      gateway_ip: "192.168.0.1"
  router:
    type: OS::Neutron::Router
    properties:
      name: { get_param: router_name }
      external_gateway_info: { "enable_snat": True, "network": { get_param: public_net_id } }
  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: private_subnet }
  server1:
    type: OS::Nova::Server
    properties:
      name: { get_param: server1_name }
      block_device_mapping:
        - volume_size: 5
          volume_id: { get_resource: "server1_disk" }
          device_name: "/dev/vda"
      config_drive: "False"
      flavor: { get_param: server_flavor }
      image: { get_param: image_centos7 }
      key_name: { get_param: key_name }
      networks:
        - port: { get_resource: server1_port }
  server1_disk:
    type: OS::Cinder::Volume
    properties:
      name: server1_disk
      image: { get_param: image_centos7 }
      size: 5
  server1_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }
  server1_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: { get_param: public_net_id }
      port_id: { get_resource: server1_port }
    depends_on: router_interface
  server2:
    type: OS::Nova::Server
    properties:
      name: { get_param: server2_name }
      block_device_mapping:
        - volume_size: 5
          volume_id: { get_resource: "server2_disk" }
          device_name: "/dev/vda"
      config_drive: "False"
      flavor: { get_param: server_flavor }
      image: { get_param: image_debian7 }
      key_name: { get_param: key_name }
      networks:
        - port: { get_resource: server2_port }
  server2_disk:
    type: OS::Cinder::Volume
    properties:
      name: server2_disk
      image: { get_param: image_debian7 }
      size: 5
  server2_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }
outputs:
  server1_private_ip:
    description: private ip within local subnet of server1 with installed Centos 7 distro
    value: { get_attr: [ server1_port, fixed_ips, 0, ip_address ] }
  server1_public_ip:
    description: floating_ip that is assigned to server1 server
    value: { get_attr: [ server1_floating_ip, floating_ip_address ] }
  server2_private_ip:
    description: private ip within local subnet of server2 with installed Debian 7 distro
    value: { get_attr: [ server2, first_address ] }
Let’s take a closer look at this structure.
The template is made up of several blocks. The first tells us the template version and format. Every new release of OpenStack supports its own set of properties and values, which are gradually changing. Our example uses version 2013-05-23. This supports all the features implemented in Icehouse.
heat_template_version: 2013-05-23
description: >
  Basic template of two servers, one network and one router
The second block declares the parameters the template accepts, with a description for each:
parameters:
  key_name:
    type: string
    description: Name of keypair to assign to servers for ssh authentication
  public_net_id:
    type: string
    description: UUID of public network to outer world
    default: 98863f6c-638e-4b48-a377-01f0e86f34ae
  server_flavor:
    type: string
    description: UUID of virtual hardware configurations that are called flavors in openstack
  private_net_name:
    type: string
    description: The Name of private network (L2 level)
  private_subnet_name:
    type: string
    description: the Name of private subnet (L3 level)
  router_name:
    type: string
    description: The Name of router that connects private and public networks
  server1_name:
    type: string
    description: Custom name of server1 virtual machine
  server2_name:
    type: string
    description: Custom name of server2 virtual machine
  image_centos7:
    type: string
    description: UUID of glance image with centos 7 distro
  image_debian7:
    type: string
    description: UUID of glance image with debian 7 distro
Then we list a few additional parameters which will be sent to Heat when we create the stack. We can set the keys for SSH connections to our new server in the key_name parameter. The server_flavor and public_net_id parameters act as identifiers (UUID) for the “hardware” configuration of the virtual machine and public network. We can also assign names to the new devices and machines.
resources:
  private_net:
    type: OS::Neutron::Net
    properties:
      name: { get_param: private_net_name }
  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: { get_param: private_subnet_name }
      network_id: { get_resource: private_net }
      allocation_pools:
        - start: "192.168.0.10"
          end: "192.168.0.254"
      cidr: "192.168.0.0/24"
      enable_dhcp: True
      gateway_ip: "192.168.0.1"
  router:
    type: OS::Neutron::Router
    properties:
      name: { get_param: router_name }
      external_gateway_info: { "enable_snat": True, "network": { get_param: public_net_id } }
  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: private_subnet }
  server1:
    type: OS::Nova::Server
    properties:
      name: { get_param: server1_name }
      block_device_mapping:
        - volume_size: 5
          volume_id: { get_resource: "server1_disk" }
          device_name: "/dev/vda"
      config_drive: "False"
      flavor: { get_param: server_flavor }
      image: { get_param: image_centos7 }
      key_name: { get_param: key_name }
      networks:
        - port: { get_resource: server1_port }
  server1_disk:
    type: OS::Cinder::Volume
    properties:
      name: server1_disk
      image: { get_param: image_centos7 }
      size: 5
  server1_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }
  server1_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: { get_param: public_net_id }
      port_id: { get_resource: server1_port }
    depends_on: router_interface
  server2:
    type: OS::Nova::Server
    properties:
      name: { get_param: server2_name }
      block_device_mapping:
        - volume_size: 5
          volume_id: { get_resource: "server2_disk" }
          device_name: "/dev/vda"
      config_drive: "False"
      flavor: { get_param: server_flavor }
      image: { get_param: image_debian7 }
      key_name: { get_param: key_name }
      networks:
        - port: { get_resource: server2_port }
  server2_disk:
    type: OS::Cinder::Volume
    properties:
      name: server2_disk
      image: { get_param: image_debian7 }
      size: 5
  server2_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }
The next block describes the resources we’ve created: networks, router, servers, etc. In this part of the template, we define the local network (private_net), the subnet, its range of addresses, and can enable DHCP support.
The next step is to create a router and its interface; the router will connect to the local network we’ve created via this interface. Then we list the servers. Each server needs a port and a disk. The first server also gets a floating IP address (floating_ip): an external address from the public network that is associated with a “grey” (private) address on the local network. Subsequent servers don’t need this.
server1_floating_ip:
  type: OS::Neutron::FloatingIP
  properties:
    floating_network_id: { get_param: public_net_id }
    port_id: { get_resource: server1_port }
  depends_on: router_interface
Pay close attention to how parameters and resources are used when describing new devices. Above, we showed the description of a floating IP address for the first server. We need to define the UUID of the public network where it will get the floating IP address (floating_network_id) and the UUID for the server port (port_id) that will connect to the address. For the get_param function, we indicate that the value should be taken from the public_net_id (we’ll discuss how to use this parameter below). We still don’t have an identifier for the first server’s port though; this will only become available after the server has been created. The get_resource function indicates that the value of server1_port will be used as the port_id UUID as soon as it’s created.
Without an explicitly declared dependency, attempting to delete the stack can fail with an error like this:

Resource DELETE failed: Conflict: Router interface for subnet 8958ffad-7622-4d98-9fd9-6f4423937b59 on router 7ee9754b-beba-4301-9bdd-166117c5e5a6 cannot be deleted, as it is required by one or more floating IPs.
According to this message, the router can’t be deleted because a floating IP address is attached to the network the router connects to. As one would expect, before a stack can be deleted, the IP address must be released first, and only then the router and the network connected to it. The problem is that the resources of all these components (Neutron, Cinder, Nova, Glance) are independent of one another; they are separate entities between which connections and dependencies have to be created.
When creating a stack, Heat usually defines the order in which resources need to be created and the connections between them need to be established. These connections are also taken into consideration when a stack is deleted: they define the order resources are deleted. However, just like in our example above, errors do sometimes occur. We clearly defined that the floating ip address is attached to the router and its interface using the depends_on directive. Because of this connection, the IP address will now be assigned after the router and interface have been created. The order will be reversed when resources are deleted: first the floating IP address will be deleted, then the router and its interface.
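The ordering Heat derives from these dependencies amounts to a topological sort: create dependencies first, then reverse the order for deletion. A minimal sketch of the idea (the resource names mirror our template, but the dependency graph is hand-built for illustration, not read from Heat, and no cycle detection is included):

```python
# Sketch: derive create order from declared dependencies,
# then reverse it for deletion -- the same idea Heat applies to a stack.

def create_order(deps):
    """deps maps a resource to the resources it depends on."""
    order = []
    seen = set()

    def visit(res):
        if res in seen:
            return
        seen.add(res)
        for dep in deps.get(res, []):
            visit(dep)            # dependencies are created first
        order.append(res)

    for res in sorted(deps):
        visit(res)
    return order

# Dependencies taken (simplified) from our template:
deps = {
    "private_net": [],
    "private_subnet": ["private_net"],
    "router": [],
    "router_interface": ["router", "private_subnet"],
    "server1_port": ["private_net", "private_subnet"],
    "server1_floating_ip": ["server1_port", "router_interface"],
}

order = create_order(deps)
# Deletion simply runs in the opposite direction:
delete_order = list(reversed(order))
```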
The last block of the template describes the output values we want for our virtual devices; these values become available only after the stack has been created.
outputs:
  server1_private_ip:
    description: private ip address within local subnet of server 1 with installed Centos7 distro
    value: { get_attr: [ server1_port, fixed_ips, 0, ip_address ] }
  server1_public_ip:
    description: floating ip that is assigned to server1 server
    value: { get_attr: [ server1_floating_ip, floating_ip_address ] }
  server2_private_ip:
    description: private ip address within local subnet of server2 with installed Debian7 distro
    value: { get_attr: [ server2, first_address ] }
In this fragment, we request the following values from resources created while the stack was being assembled: the address of the first server on the local network, the first server’s public address (floating IP address), and the address of the second server on the local network. We’ve entered a short description and the desired value for each output. We did this using the get_attr function, which takes the name of a resource followed by the attribute to read (and, optionally, keys or indexes within that attribute).
Pay attention to the different ways we can get a local network address from the first and second server. They are both perfectly acceptable. The difference is that in the first case, we refer to Neutron (remember: server1_port is set to OS::Neutron::Port) and the first IP address is taken from the fixed_ips attribute. In the second case, we refer to Nova (server2 is set to OS::Nova::Server) and the first_address attribute.
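The attribute path that get_attr takes can be modeled as a simple walk over nested data. Here is a hypothetical sketch (the attribute layout below only imitates what Neutron and Nova report; it is not taken from a live API):

```python
# Sketch of how a get_attr path like
#   [ server1_port, fixed_ips, 0, ip_address ]
# walks nested resource attributes. The data is illustrative.

def get_attr(resources, name, *path):
    value = resources[name]
    for step in path:
        if isinstance(value, list):
            value = value[int(step)]   # numeric steps index into lists
        else:
            value = value[step]        # string steps look up mappings
    return value

resources = {
    "server1_port": {"fixed_ips": [{"ip_address": "192.168.0.13"}]},
    "server2": {"first_address": "192.168.0.10"},
}

ip1 = get_attr(resources, "server1_port", "fixed_ips", 0, "ip_address")
ip2 = get_attr(resources, "server2", "first_address")
```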
OpenStack components like Neutron and Cinder appeared after Nova, which is why Nova used to perform many more functions, including disk and network management. As Neutron and Cinder matured, these functions became redundant, but were kept for compatibility. Nova’s API is slowly being revised, and several functions are now marked as deprecated. It’s possible the first_address attribute will no longer be supported in the near future.
value: { get_attr: [ server1_port, fixed_ips, 0, ip_address ] }
value: { get_attr: [ server2, first_address ] }
More information on templates and their makeup can be found in the official manual.
Creating a Stack
Now that we’ve prepared a template, we’ll check it for any syntax errors and against the standard:
$ heat template-validate -f publication.yml
If the template is well-formed, we’ll get the following response in JSON format:
{
  "Description": "Basic template of two servers, one network and one router\n",
  "Parameters": {
    "server2_name": {
      "NoEcho": "false",
      "Type": "String",
      "Description": "",
      "Label": "server2_name"
    },
    "private_subnet_name": {
      "NoEcho": "false",
      "Type": "String",
      "Description": "the Name of private subnet",
      "Label": "private_subnet_name"
    },
    "key_name": {
      "NoEcho": "false",
...
Then we move directly on to creating the stack:
$ heat stack-create TESTA -f testa.yml -P key_name="testa" \
    -P public_net_id="ab2264dd-bde8-4a97-b0da-5fea63191019" \
    -P server_flavor="1406718579611-8007733592" \
    -P private_net_name=localnet -P private_subnet_name="192.168.0.0/24" \
    -P router_name=router -P server1_name=Centos7 -P server2_name=Debian7 \
    -P image_centos7="CentOS 7 64-bit" \
    -P image_debian7="ba78ce9b-f800-4fb2-ad85-a68ca0f19cb8"
Manually transferring parameters to Heat is far from ideal; mistakes can easily be made. To get around this, we create an additional file that follows the format of the primary template, but only contains the main parameters.
parameters:
  key_name: testa
  public_net_id: ab2264dd-bde8-4a97-b0da-5fea63191019
  server_flavor: myflavor
  private_net_name: localnet
  private_subnet_name: 192.168.0.0/24
  router_name: router
  server1_name: Centos7
  server2_name: Debian7
  image_centos7: CentOS 7 64-bit
  image_debian7: ba78ce9b-f800-4fb2-ad85-a68ca0f19cb8
This makes it easier to create a stack using the Heat console utility.
$ heat stack-create TESTA -f testa.yml -e testa_env.yml
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| 96d37fd2-52e8-4b59-bf42-2ce72566e03e | TESTA      | CREATE_IN_PROGRESS | 2014-12-17T15:17:17Z |
+--------------------------------------+------------+--------------------+----------------------+
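If you generate such environment files from scripts, plain string formatting is enough; a minimal sketch (no YAML library assumed, values quoted so entries like subnet CIDRs survive parsing):

```python
# Sketch: render a parameters dict into a Heat environment file.

def render_env(params):
    lines = ["parameters:"]
    for key, value in params.items():
        # Quote every value so strings like "192.168.0.0/24" stay intact.
        lines.append('  {}: "{}"'.format(key, value))
    return "\n".join(lines) + "\n"

env_text = render_env({
    "key_name": "testa",
    "private_subnet_name": "192.168.0.0/24",
})
```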
We can use the standard set of OpenStack utilities to find the values of the requested Heat parameters. For example, we can use Neutron to find the public network identifier public_net_id:
$ neutron net-list
+--------------------------------------+------------------+-----------------------------------------------------+
| id                                   | name             | subnets                                             |
+--------------------------------------+------------------+-----------------------------------------------------+
| 168bb122-a00a-4e34-bcc9-3bd0b417ee2b | localnet         | 256647b7-7b73-4534-8a79-1901c9b25527 192.168.0.0/24 |
| ab2264dd-bde8-4a97-b0da-5fea63191019 | external-network | 102a9263-2d84-4335-acfb-6583ac8e70aa                |
|                                      |                  | aa9e4fc4-63b0-432e-bcbd-82a613310acb                |
+--------------------------------------+------------------+-----------------------------------------------------+
Using this same method and the right utilities, we can find the names or identifiers for server_flavor, image_server1 and image_server2.
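When scripting this, the ASCII tables these utilities print can be parsed directly. A rough sketch that pulls a network’s id out of neutron net-list style output (the table text here is a shortened stand-in, not live output):

```python
# Sketch: extract the id of a named network from the ASCII table
# that utilities like "neutron net-list" print.

TABLE = """\
+----+------------------+---------+
| id | name             | subnets |
+----+------------------+---------+
| ab2264dd-bde8-4a97-b0da-5fea63191019 | external-network | ... |
| 168bb122-a00a-4e34-bcc9-3bd0b417ee2b | localnet         | ... |
+----+------------------+---------+
"""

def find_id(table, wanted_name):
    for line in table.splitlines():
        if not line.startswith("|"):
            continue                       # skip the +---+ border lines
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) >= 2 and cells[1] == wanted_name:
            return cells[0]
    return None

net_id = find_id(TABLE, "external-network")
```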
Stack Operations
After we’ve created a stack, we need to be sure that there haven’t been any errors and find out which IP addresses have been assigned to the server (starting with the public IP address of the first server).
A list of all created stacks can be retrieved using the heat stack-list command. The printout will include information on the status of each stack:
$ heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| e7ad8ef1-921d-4e70-a203-20dbc32d4a02 | TESTA      | CREATE_COMPLETE | 2014-12-17T18:30:54Z |
| ab5159d2-08ad-47a2-a964-a2c3425eca8f | TESTNODE   | CREATE_FAILED   | 2014-12-17T18:39:38Z |
+--------------------------------------+------------+-----------------+----------------------+
As you can see from the printout, the second stack failed: we didn’t properly set the UUID of the local network that its server port connects to. Errors also commonly occur due to a lack of available resources (every project has quotas on the number of cores, RAM, etc.).
If a stack is successfully created, then the “outputs” section of the stack-show printout will give us the parameters we’re interested in.
+----------------------+----------------------------------------------------------------------------------------------------------------------------------+
| Property             | Value                                                                                                                            |
+----------------------+----------------------------------------------------------------------------------------------------------------------------------+
| capabilities         | []                                                                                                                               |
| creation_time        | 2014-12-17T15:17:17Z                                                                                                             |
| description          | Basic template of two servers, one network and one router                                                                        |
| disable_rollback     | True                                                                                                                             |
| id                   | 96d37fd2-52e8-4b59-bf42-2ce72566e03e                                                                                             |
| links                | https://api.selvpc.ru/orchestration/v1/58ad5a5408ad4ad5864f260308884539/stacks/TESTA/96d37fd2-52e8-4b59-bf42-2ce72566e03e (self) |
| notification_topics  | []                                                                                                                               |
| outputs              | [                                                                                                                                |
|                      |   {                                                                                                                              |
|                      |     "output_value": "192.168.0.10",                                                                                              |
|                      |     "description": "private ip within local subnet of server2 with installed Debian 7 distro",                                   |
|                      |     "output_key": "server2_private_ip"                                                                                           |
|                      |   },                                                                                                                             |
|                      |   {                                                                                                                              |
|                      |     "output_value": "192.168.0.13",                                                                                              |
|                      |     "description": "private ip within local subnet of server1 with installed Centos 7 distro",                                   |
|                      |     "output_key": "server1_private_ip"                                                                                           |
|                      |   },                                                                                                                             |
|                      |   {                                                                                                                              |
|                      |     "output_value": "95.213.154.134",                                                                                            |
|                      |     "description": "floating_ip that is assigned to server1 server",                                                             |
|                      |     "output_key": "server1_public_ip"                                                                                            |
|                      |   }                                                                                                                              |
|                      | ]                                                                                                                                |
| parameters           | {                                                                                                                                |
|                      |   "server2_name": "Debian7",                                                                                                     |
|                      |   "image_centos7": "CentOS 7 64-bit",                                                                                            |
|                      |   "OS::stack_id": "96d37fd2-52e8-4b59-bf42-2ce72566e03e",                                                                        |
|                      |   "OS::stack_name": "TESTA",                                                                                                     |
|                      |   "private_subnet_name": "192.168.0.0/24",                                                                                       |
|                      |   "key_name": "testa",                                                                                                           |
|                      |   "server1_name": "Centos7",                                                                                                     |
|                      |   "public_net_id": "ab2264dd-bde8-4a97-b0da-5fea63191019",                                                                       |
|                      |   "private_net_name": "localnet",                                                                                                |
|                      |   "router_name": "router",                                                                                                       |
|                      |   "server_flavor": "myflavor",                                                                                                   |
|                      |   "image_debian7": "d3e1be2a-e0fc-4cfc-ac07-35c9706f02cc"                                                                        |
|                      | }                                                                                                                                |
| stack_name           | TESTA                                                                                                                            |
| stack_status         | CREATE_COMPLETE                                                                                                                  |
| stack_status_reason  | Stack CREATE completed successfully                                                                                              |
| template_description | Basic template of two servers, one network and one router                                                                        |
| timeout_mins         | None                                                                                                                             |
| updated_time         | None                                                                                                                             |
+----------------------+----------------------------------------------------------------------------------------------------------------------------------+
In most cases, the heat stack-show printout is extremely detailed and just plain huge. Finding a minute yet important detail (like the IP address of the first server) in the printout can be incredibly tedious. If we’re only interested in the first server’s floating IP address, then we can retrieve it using the following command, where we specify the stack name and public IP address:
$ heat output-show TESTA server1_public_ip
"95.213.154.192"
Stacks can easily be deleted using the heat stack-delete command:
$ heat stack-delete TESTA
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| e7ad8ef1-921d-4e70-a203-20dbc32d4a02 | TESTA      | DELETE_IN_PROGRESS | 2014-12-17T18:30:54Z |
+--------------------------------------+------------+--------------------+----------------------+
In situations where you have to temporarily free up system resources, you can suspend a stack using the heat action-suspend command and resume it later with the heat action-resume command.
We’ve only looked at the stack operations we consider most frequently used; we haven’t touched on managing individual resources, stack events, updating stacks on the fly, or several other features. More detailed information can be found in the official documentation or by running heat help.
Conclusion
In this article, we got to know the main principles behind the OpenStack orchestration module, Heat. This gives us an extra abstraction level when working with clouds and spares us from performing the majority of routine activities.
Heat, of course, is not limited to these functions. We didn’t talk about a key ability: transferring user data (user_data) to a newly created machine, which executes that data the first time it boots. Strictly speaking, Heat doesn’t transfer data to machines on its own, but through Nova. Thanks to its ability to describe connections between resources, however, Heat places no limit on the number of machines that can receive and execute such data.
For example, suppose you need to create several machines, where one acts as a database server and the others connect to it by IP address. By using templates, you don’t have to worry about the order in which machines are created or about their network configurations. As soon as the appropriate resources have been created, all of the necessary values, including the IP address of the database server, will be delivered in user_data.
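As a hedged sketch of what that might look like in HOT (the resource names "db_server" and "app_server" and the config file path are hypothetical, not part of our template above; str_replace is the HOT function for substituting values into a string):

```yaml
# Illustrative fragment only: "db_server", "app_server" and the
# config path are made-up names for this sketch.
app_server:
  type: OS::Nova::Server
  properties:
    # ... flavor, image, networks ...
    user_data:
      str_replace:
        template: |
          #!/bin/bash
          echo "db_host=$db_ip" >> /etc/myapp.conf
        params:
          $db_ip: { get_attr: [ db_server, first_address ] }
```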
To take full advantage of these features, we need to understand that data is transferred to a machine and then processed. We’ll describe this in more detail in a future article.
Software-Based Routing with VyOS
A modern business’ stability and efficiency depend largely on the continuous operation of its IT infrastructure. Unfortunately, maintenance and operating expenses can be quite high, especially for small and medium-sized businesses.
To cut costs, many companies are now looking to IT outsourcing: instead of purchasing their own equipment, companies rent from a data center and hire professionals to maintain it.
For this to be beneficial from an organizational and financial standpoint, the technical side of things needs to be thoroughly thought out.
When transferring part of an IT infrastructure to a data center, it’s important to decide how all of the resources will connect to a single network. Solutions offered by the leading manufacturers (Juniper, Cisco, etc.) are often too pricey for small and medium-sized businesses. Keeping this in mind, it’s no surprise that there has been a growing interest in free open-source products. In terms of functionality, most of these work the same as their for-pay counterparts, if not better.
A key element of the corporate network is the router: a special network device that connects network segments together and exchanges packets between them. Routers can be either hardware or software-based. When building an IT infrastructure while minimizing expenditures, a software-based router may be the best option.
In this article, we’ll be looking at the VyOS router, which is distributed under free license, and how it can be used to solve some real-life tasks.
VyOS: General Information
VyOS is a fork of the Vyatta network OS (acquired by Brocade), whose free community edition has since been discontinued. VyOS was first released in December 2013 under the codename Hydrogen.
Its latest major release, Helium, was published in September 2014. The VyOS command line interface (CLI) is similar to the CLI for Juniper Networks devices.
VyOS has many different features, including:
- IPv4 and IPv6 firewalls with p2p traffic filtering;
- network address translation (NAT);
- IPv4 and IPv6 DHCP servers;
- an intrusion detection system;
- load balancing and backup channels;
- router redundancy with connection state table synchronization;
- VPN (IPsec, L2TP/IPsec, PPTP, OpenVPN);
- traffic analysis (NetFlow and sFlow);
- web proxy and URL filtering.
Like Vyatta, VyOS is built on Debian. This lets us add functions by installing additional deb packages.
Installation
For detailed instructions, look here. VyOS has two installation methods: install system and install image. The first option (install system) just installs the operating system onto a disk. When you choose install image, every version of VyOS gets saved to a separate directory, letting you rollback to a previous version if any issues arise (this is the recommended installation method).
So, we boot from our disk, log into the system (login – vyos, password – vyos), and run the install image command. During the installation, we’ll need to answer the standard Linux installation questions. Once the installation is complete, we run reboot, and again start up the system and log onto VyOS with the login and password we set during installation.
Practical Example
We’ll look at VyOS’ features by running some practical examples. The scenario: an organization is made up of three geographically distributed branches: one in Moscow, one in St. Petersburg, and the third in Khabarovsk. They have four servers in a data center in St. Petersburg. Our task: allow only one server to connect directly to the Internet; the others should be connected to a local network and access the Internet through the router. Each of our branches will be using a different connection type: L2TP/IPsec, PPTP, and OpenVPN.
Our network will look like this:
[img]
Configuring Nodes
Even though we’ve already installed VyOS, we still don’t have a network, so we’ll start by configuring one in the KVM console.
We start by entering configuration mode with the configure command, then give the first (external) network interface the address 95.213.170.75:
set interfaces ethernet eth0 address 95.213.170.75/29
set interfaces ethernet eth0 description "WAN"
Here, we’ve assigned interface eth0 an IP address and a description to avoid confusion later on.
Then we set the default gateway address and DNS address:
set system gateway-address 95.213.170.73
set system name-server 188.93.16.19
We’ve entered the address for the Selectel DNS in St. Petersburg, but any will do.
Next, we configure our SSH service, which we’ll later use for configuring future nodes:
set service ssh port "22"
The logic behind VyOS is almost the same as with Juniper Networks devices: before any changes come into effect, the commit command has to be executed. For changes to stay in effect after rebooting, we have to run the save command. This is where the VyOS command logic differs from JunOS: in Juniper’s network OS, changes don’t have to be saved after running commit.
We connect to our router via SSH then log onto the system by entering the login and password we set during installation. Once we’ve logged on, we configure the internal network interface eth1. This is the local network interface that the servers in the data center connect to. We assign it the address 10.0.10.1 with netmask /24 and add a description:
set interfaces ethernet eth1 address 10.0.10.1/24
set interfaces ethernet eth1 description "LAN"
If we want our machine to resolve network resource names, we have to configure DNS. We can set up a DNS forwarder, which will pass requests on to the name servers specified in the configuration. Configuring this component is fairly simple:
set service dns forwarding cache-size "0"
set service dns forwarding listen-on "eth1"
set service dns forwarding name-server "188.93.16.19"
set service dns forwarding name-server "188.93.17.19"
In the first command, we set the cache size of the DNS forwarder for saving records. Since there’s really no point for us to save any DNS records, we set this to zero. The second command sets the interface the DNS forwarder will listen on. We’ve entered an internal interface so that our DNS forwarder won’t be accessible to the whole Internet. The third and fourth commands establish which addresses requests will be forwarded to. Our example uses Selectel’s DNS, but again, you can set any servers you want.
All of the components we need for our local network are ready to go. Now we configure the firewall.
In VyOS, we can use firewall rulesets and assign them names. In our example, we use a set of rules called OUTSIDE for our external network and INSIDE for our internal network.
We want to permit all connections “from the inside out” for our external interface, and for our internal interface, all connections “from the inside out” and SSH access.
set firewall name OUTSIDE default-action "drop"
set firewall name OUTSIDE rule 1 action "accept"
set firewall name OUTSIDE rule 1 state established "enable"
set firewall name OUTSIDE rule 1 state related "enable"
These commands allow all pre-established and related connections.
Then we set the INSIDE firewall rules:
set firewall name INSIDE default-action 'drop'
set firewall name INSIDE rule 1 action 'accept'
set firewall name INSIDE rule 1 state established 'enable'
set firewall name INSIDE rule 1 state related 'enable'
set firewall name INSIDE rule 2 action 'accept'
set firewall name INSIDE rule 2 icmp type-name 'echo-request'
set firewall name INSIDE rule 2 protocol 'icmp'
set firewall name INSIDE rule 2 state new 'enable'
set firewall name INSIDE rule 3 action 'drop'
set firewall name INSIDE rule 3 destination port '22'
set firewall name INSIDE rule 3 protocol 'tcp'
set firewall name INSIDE rule 3 recent count '4'
set firewall name INSIDE rule 3 recent time '60'
set firewall name INSIDE rule 3 state new 'enable'
set firewall name INSIDE rule 31 action 'accept'
set firewall name INSIDE rule 31 destination port '22'
set firewall name INSIDE rule 31 protocol 'tcp'
set firewall name INSIDE rule 31 state new 'enable'
In the first line, we assign the default action; here, it's "drop" (the firewall will drop any packet that doesn't match one of our rules). Rule 1 mirrors the rules we wrote for the external interface. Rule 2 allows ICMP echo requests; this is needed for pinging the router if anything goes wrong. Rules 3 and 31 are responsible for SSH connections: rule 3 drops a new connection to TCP port 22 if more than four attempts arrive from the same source within 60 seconds (basic brute-force protection), and rule 31 accepts the remaining new SSH connections.
Now we apply these new rules to the appropriate interfaces:
set interfaces ethernet eth0 firewall in name 'OUTSIDE'
set interfaces ethernet eth1 firewall out name 'INSIDE'
Take a look at the in and out parameters. They describe the direction of traffic through the interface (entering or leaving it) and are in no way related to the names of the firewall rulesets.
Don't forget to apply and save the configuration using the commit and save commands.
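To confirm the rulesets were applied and see which rules traffic is actually hitting, VyOS's operational mode provides show commands with per-rule statistics (a quick sketch; the output depends on your configuration):

```shell
show firewall name OUTSIDE statistics
show firewall name INSIDE statistics
```

If the packet counters on rule 1 grow while the default-action counter stays low, the stateful rules are matching as intended.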
Configuring a VPN
As we've already said, our branches will be using different kinds of VPN connections. We'll start by configuring L2TP/IPSec:
set vpn ipsec ipsec-interfaces interface eth0
set vpn ipsec nat-traversal enable
set vpn ipsec nat-networks allowed-network 0.0.0.0/0
set vpn l2tp remote-access outside-address 95.213.170.75
set vpn l2tp remote-access client-ip-pool start 10.0.10.20
set vpn l2tp remote-access client-ip-pool stop 10.0.10.30
set vpn l2tp remote-access ipsec-settings authentication mode pre-shared-secret
set vpn l2tp remote-access ipsec-settings authentication pre-shared-secret
set vpn l2tp remote-access authentication mode local
set vpn l2tp remote-access authentication local-users username password
In the first three commands, we configure IPSec: we define the interface IPSec will work on, enable NAT traversal, and allow connections from clients behind NAT on any network. The next commands are for L2TP. It's fairly obvious what they're responsible for, so we'll just take a look at a few parameters:
- outside-address: sets the VPN server’s external address;
- pre-shared-secret: sets the password which will later be used when configuring the VPN on client devices;
- authentication mode local: sets the authentication type. Our example uses a local database for authentication, but a RADIUS server can also be used for centrally managing user accounts.
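For example, switching to RADIUS-based authentication might look like this (the server address and shared key here are hypothetical placeholders, not values from our setup):

```shell
set vpn l2tp remote-access authentication mode radius
set vpn l2tp remote-access authentication radius-server 10.0.10.5 key 'radius-secret'
```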
In the last line, we create a user and define their password. Afterwards, we add some rules to the firewall to allow L2TP/IPSec traffic.
set firewall name INSIDE rule 4 action 'accept'
set firewall name INSIDE rule 4 protocol 'esp'
set firewall name INSIDE rule 41 action 'accept'
set firewall name INSIDE rule 41 destination port '500'
set firewall name INSIDE rule 41 protocol 'udp'
set firewall name INSIDE rule 42 action 'accept'
set firewall name INSIDE rule 42 destination port '4500'
set firewall name INSIDE rule 42 protocol 'udp'
set firewall name INSIDE rule 43 action 'accept'
set firewall name INSIDE rule 43 destination port '1701'
set firewall name INSIDE rule 43 ipsec 'match-ipsec'
set firewall name INSIDE rule 43 protocol 'udp'
Rule 4 permits ESP traffic, which the established IPSec tunnel runs on; rule 41 allows IKE key exchange on UDP port 500; rule 42 allows NAT traversal on UDP port 4500; and rule 43 opens UDP port 1701, which L2TP works on.
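Once a client connects, the tunnel state can be checked from VyOS's operational mode (standard show commands; the output depends on your setup):

```shell
show vpn ipsec sa
show vpn remote-access
```

The first command lists active IPSec security associations; the second shows connected remote-access VPN users.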
Now let’s look at configuring our second VPN connection and set up an OpenVPN server. We start by copying the easy-rsa files to the directory /config/easy-rsa2 to avoid losing them when updating the system:
cp -rv /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /config/easy-rsa2
If necessary, we can change the default variables for certificates. For example:
nano /config/easy-rsa2/vars

export KEY_COUNTRY="RU"
export KEY_CITY="Saint-Petersburg"
export KEY_ORG="Selectel"
export KEY_EMAIL="t-rex@selectel.ru"
This data will be displayed in the fields of certificates we generate. We jump to the /config/easy-rsa2/ directory and load the variables:
cd /config/easy-rsa2/
source ./vars
We delete all of the keys:
./clean-all
Then we generate the certificate authority files:
./build-ca
./build-dh
and the server certificate:
./build-key-server t-rex-server
Next, we copy the keys to the appropriate directories:
cp /config/easy-rsa2/keys/ca.crt /config/auth/
cp /config/easy-rsa2/keys/dh1024.pem /config/auth/
cp /config/easy-rsa2/keys/t-rex-server.key /config/auth/
cp /config/easy-rsa2/keys/t-rex-server.crt /config/auth/
Once we’ve done this, we prepare client files for connecting to the server:
./build-key branch-msk
and immediately copy them to a separate folder:
cd /config/easy-rsa2/keys
mkdir branch-msk
cp branch-msk* branch-msk/
cp ca.crt branch-msk/
The generated files are necessary for connecting clients to the server, which means they will have to be transferred to the client’s side. This can be done using any SCP client: WinSCP for Windows or the standard console client scp for Linux.
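From a Linux client, fetching the generated files might look something like this (the username, server address, and destination directory are examples, not values fixed by our setup):

```shell
scp -r vyos@95.213.170.75:/config/easy-rsa2/keys/branch-msk ~/openvpn-keys/
```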
Then we configure the server:
set interfaces openvpn vtun0 mode 'server'
set interfaces openvpn vtun0 server name-server '10.0.10.1'
set interfaces openvpn vtun0 server push-route '10.0.10.0/24'
set interfaces openvpn vtun0 server subnet '10.1.10.0/24'
set interfaces openvpn vtun0 tls ca-cert-file '/config/auth/ca.crt'
set interfaces openvpn vtun0 tls cert-file '/config/auth/t-rex-server.crt'
set interfaces openvpn vtun0 tls dh-file '/config/auth/dh1024.pem'
set interfaces openvpn vtun0 tls key-file '/config/auth/t-rex-server.key'
set service dns forwarding listen-on vtun0
commit
save
Take a look at the last command: we forward name resolution requests to the DNS forwarder that we configured earlier. We’d also like to point out that with OpenVPN, we first used a separate network for building the actual tunnel, and then routed it to the local network with our servers. This is due to the nature of the protocol. We’ll write more detailed information on this in a future article.
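On the client side, a minimal OpenVPN configuration for this server might look like the following sketch (the file names match the keys we generated; the port is OpenVPN's default, 1194, since the server configuration didn't change it):

```shell
# branch-msk.ovpn -- a minimal client config sketch, not the article's exact file
client
dev tun
proto udp
remote 95.213.170.75 1194
ca ca.crt
cert branch-msk.crt
key branch-msk.key
persist-key
persist-tun
```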
Configuring a PPTP Server
Now we'll configure the last of our VPN connections: PPTP. Of course, PPTP is not very secure and thus not often used for transferring confidential information; however, it's still widely used for providing remote access, since practically every device with a network connection has a PPTP client.
From our example, we can see that PPTP is configured almost the same way as L2TP:
set vpn pptp remote-access authentication mode local
set vpn pptp remote-access authentication local-users username password
set vpn pptp remote-access client-ip-pool start 10.0.10.31
set vpn pptp remote-access client-ip-pool stop 10.0.10.40
set vpn pptp remote-access dns-server server-1 188.93.17.19
set vpn pptp remote-access outside-address 95.213.170.75

In the first command, we set the user authentication mode to local. If you have a RADIUS server, you can choose the radius authentication mode instead, which makes managing user accounts much more convenient. Then we create local users and set an IP address range and DNS data for clients. The last command sets the address of the interface our server will listen on. Then we apply and save the configuration:
commit save
The server is now ready for clients.
All that’s left is to let traffic from the local network out. This is how Internet access is provided to servers in the local network and to users connected to our router from the branches:
set nat source rule 1 outbound-interface 'eth0'
set nat source rule 1 source address '10.0.10.0/24'
set nat source rule 1 translation address masquerade
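To verify that masquerading is working, the configured rules can be inspected from operational mode, and a ping from a host on the 10.0.10.0/24 network should now reach the outside world (exact command availability may vary slightly between VyOS versions):

```shell
show nat source rules
show nat source statistics
```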
Conclusion
Now we’re ready: we’ve built a network to accomplish our task with the given conditions. One of the servers (located in St. Petersburg) acts as a router, the others are connected to it via a local network. The branches’ routers can access local network resources over a secure VPN connection.
In this brief overview, we described only the basics of building a small corporate network. In future articles, we’ll talk more in depth about VyOS’ capabilities and teach you how to more flexibly configure a firewall and port forwarding, allow traffic from protocols commonly used in corporate networks, and also look at the following topics:
- organizing GRE tunnels;
- L2TPv3 protocols;
- QoS;
- zone-based firewalls;
- performance tuning a network interface;
- VRRP.