Virtual Private Cloud API: Console Clients
Today we’ll be talking about our Virtual Private Cloud. More specifically, we’ll be looking at the OpenStack API and how it can be used to access our VPC from console clients.
Creating Users
Before using the API, we must first create a user and add them to the project. In the Virtual Private Cloud menu, we click the “Users” tab.
Our user list will open:
Our list is currently empty. We click Create user, enter the username we wish to create in the new window, and then click Create. Passwords are generated automatically. We can view the new user’s status by clicking the icon next to the username:
The user can be granted access to projects by clicking “Add to project” and choosing the desired projects from the list.
The user will then be displayed in the control panel for those projects. A link will appear above the username, which grants that user access to the project’s resources from a browser.
By following this link, we can log into the project as the created user. In the new control panel, click the “Access” tab and download the RC file (a script that console clients can use for authorization in Identity API v3).
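For reference, an Identity API v3 RC file is an ordinary shell script that exports credentials as environment variables. A rough sketch (the endpoint, project, and user names here are hypothetical placeholders; your downloaded rc.sh will contain your own values):

```shell
# Hypothetical values -- your downloaded rc.sh will contain your own.
export OS_AUTH_URL=https://api.example.com/v3
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
# rc.sh normally prompts for the password interactively instead of
# storing it in the file, e.g.:
#   read -sr OS_PASSWORD_INPUT && export OS_PASSWORD=$OS_PASSWORD_INPUT
```

Because the file only exports variables, it has to be sourced (not executed) so the variables land in the current shell.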
Installation
Additional software has to be installed before we can configure our system. In this article, our installation instructions apply to Ubuntu 14.04. Commands may be different in other operating systems; instructions for CentOS 7.0 and CentOS 6.5 can be found directly in the control panel (under the “Access” tab).
We install the following packages:
apt-get update
apt-get install curl python-pip python-dev git libxml2-dev libxslt1-dev python-keystoneclient python-heatclient python-novaclient python-glanceclient python-neutronclient
Then we install software which is either missing from the Ubuntu repository or outdated:
$ pip install git+https://github.com/openstack/python-cinderclient
$ pip install cliff --upgrade
$ pip install python-openstackclient
Afterwards, we execute the command:
$ source rc.sh
The program will request a password. Enter the password of the user you used to access the external panel.
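If a client later complains about missing credentials, it is worth checking that the variables from the RC file actually made it into the current shell. A small sketch (the OS_* names are the standard ones used by Identity API v3 RC files):

```shell
# check_os_env reports any of the standard OS_* variables that are unset,
# which usually means rc.sh was not sourced in this shell.
check_os_env() {
  for var in OS_AUTH_URL OS_USERNAME OS_PROJECT_NAME OS_PASSWORD; do
    eval "value=\$$var"
    if [ -z "$value" ]; then
      echo "$var is not set -- did you source rc.sh in this shell?"
    fi
  done
}
check_os_env
```

Remember that `source rc.sh` only affects the shell it is run in; a new terminal needs the file sourced again.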
Now we can get to work.
Viewing Network Information
We need a network to create virtual machines. First, we view a list of available networks:
$ neutron net-list
+--------------------------------------+------------------+-----------------------------------------------------+
| id                                   | name             | subnets                                             |
+--------------------------------------+------------------+-----------------------------------------------------+
| 1c037362-487f-4103-a73b-6cba3f5532dc | nat              | b7be542a-2eef-465e-aacd-34a0c83e6afa 192.168.0.0/24 |
| ab2264dd-bde8-4a97-b0da-5fea63191019 | external-network | 102a9263-2d84-4335-acfb-6583ac8e70aa                |
|                                      |                  | aa9e4fc4-63b0-432e-bcbd-82a613310acb                |
| fce90252-7d99-4fc7-80ae-ef763d12938d | newnetwork       | 5a1a68f9-b885-47b7-9c7e-6f0e08145e3b 192.168.1.0/24 |
+--------------------------------------+------------------+-----------------------------------------------------+
Each network has an identification number (in the ID column); this will have to be entered when we create a new server.
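Rather than copying the ID by hand, it can also be extracted from the table with a little text processing. A sketch, run here against a saved copy of two rows from the `neutron net-list` output above; against a live cloud you would pipe the command output straight into awk:

```shell
# Pull the ID of the network named "nat" out of the net-list table.
# Fields are split on "|": $2 is the id cell, $3 is the name cell.
net_id=$(awk -F'|' '
  { gsub(/^ +| +$/, "", $2); gsub(/^ +| +$/, "", $3) }  # trim padding in cells
  $3 == "nat" { print $2 }                              # match the row by name
' <<'EOF'
| 1c037362-487f-4103-a73b-6cba3f5532dc | nat              | b7be542a-2eef-465e-aacd-34a0c83e6afa 192.168.0.0/24 |
| ab2264dd-bde8-4a97-b0da-5fea63191019 | external-network | 102a9263-2d84-4335-acfb-6583ac8e70aa                |
EOF
)
echo "$net_id"
```

The same filter works for any table the OpenStack clients print, since they all share this column layout.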
Network configurations can be activated through the GUI.
Server Operations
We pull up the list of available images:
$ glance image-list
+--------------------------------------+--------------------------+-------------+------------------+-------------+--------+
| ID                                   | Name                     | Disk Format | Container Format | Size        | Status |
+--------------------------------------+--------------------------+-------------+------------------+-------------+--------+
| 552bc246-5ae7-4b48-9a64-e1e881a64cab | CentOS 6 32-bit          | raw         | bare             | 219152384   | active |
| 708a7642-80ab-486e-a031-e6b6a652004c | CentOS 6 32-bit          | raw         | bare             | 2147483648  | active |
| 978d81c0-c508-412d-9847-fb8cec294410 | CentOS 6 64-bit          | raw         | bare             | 263192576   | active |
| ee5d5bb7-8a31-467a-8bbf-f6f5bbb79334 | CentOS 6 64-bit          | raw         | bare             | 2147483648  | active |
| 647bce00-5f29-49fe-9e83-8b33cb188d17 | CentOS 7 64-bit          | raw         | bare             | 2147483648  | active |
| dff9df74-b7b3-44b0-92f3-40cb4dfd9a94 | CoreOS                   | qcow2       | ovf              | 449839104   | active |
| 3eda89b9-9ce0-47b7-9907-a2978d88632e | CoreOS                   | qcow2       | ovf              | 413007872   | active |
| d2033c50-e8f4-4ff6-9c21-cade02007f34 | Debian 7 (Wheezy) 32-bit | raw         | bare             | 10485760    | active |
| ba78ce9b-f800-4fb2-ad85-a68ca0f19cb8 | Debian 7 (Wheezy) 32-bit | raw         | bare             | 2147483648  | active |
| b2c8bc6a-dbb8-4a1a-ab8e-c63f5f2b9bdf | Debian 7 (Wheezy) 64-bit | raw         | bare             | 11534336    | active |
| 18a18569-389c-4144-82ae-e5e85862fca4 | Debian 7 (Wheezy) 64-bit | raw         | bare             | 2147483648  | active |
| 8c3233c9-25cd-4181-a422-aa24032255cc | OpenSUSE 13.1 32-bit     | raw         | bare             | 74448896    | active |
| d965d37c-6796-40bd-8966-d0d7f7f41313 | OpenSUSE 13.1 32-bit     | raw         | bare             | 3221225472  | active |
| b77015d0-3eba-4841-9d02-7e9d606d343a | OpenSUSE 13.1 64-bit     | raw         | bare             | 76546048    | active |
| b20a1e1a-3c81-4d13-926f-eb39546b9b36 | OpenSUSE 13.1 64-bit     | raw         | bare             | 3221225472  | active |
| c168e0e5-c01e-44ec-be36-1c10e2da94a5 | selectel-rescue-initrd   | ari         | ari              | 13665966    | active |
| 0b117761-4ab5-40d7-a610-127d1e10206f | selectel-rescue-kernel   | aki         | aki              | 5634192     | active |
| c2fce974-4aeb-473a-9475-176207c3f293 | Ubuntu 12.04 LTS 32-bit  | raw         | bare             | 22020096    | active |
| eeb9143c-1500-4086-8025-307bc96fc467 | Ubuntu 12.04 LTS 32-bit  | raw         | bare             | 2147483648  | active |
| dbdd5cb3-f73f-4d98-85e9-eb333463e431 | Ubuntu 12.04 LTS 64-bit  | raw         | bare             | 26214400    | active |
| c1231800-9423-4018-b138-af8860ea8239 | Ubuntu 12.04 LTS 64-bit  | raw         | bare             | 2147483648  | active |
| c61cfa0d-3f7b-489f-8e55-4904a0d6e830 | Ubuntu 14.04 LTS 32-bit  | raw         | bare             | 26214400    | active |
| fbb2bb25-5058-4f06-85c8-6d3ca268e686 | Ubuntu 14.04 LTS 32-bit  | raw         | bare             | 2147483648  | active |
| e024042b-80f5-4eea-ae29-733ae32f65e6 | Ubuntu 14.04 LTS 64-bit  | raw         | bare             | 33554432    | active |
| f10ab2a9-478d-4401-9371-384bd9731156 | Ubuntu 14.04 LTS 64-bit  | raw         | bare             | 2147483648  | active |
| 6a4b53e6-109c-4fc0-9535-b97bc2912de6 | windows_2012_final       | raw         | bare             | 10737418240 | active |
+--------------------------------------+--------------------------+-------------+------------------+-------------+--------+
We choose the image we want and copy its ID; this will be needed when we create our server.
Now we configure our server (in OpenStack terminology, this is referred to as creating a “flavor”):
$ nova flavor-create myflvr auto 1024 0 2
(The arguments are, in order: the flavor name, the flavor ID, RAM in MB, disk size in GB, and the number of vCPUs.)
The hard disk space should be set to 0 in this command. In our system, a Cinder volume is connected to the machine as a root (system) disk. This decision was made based on its flexibility: unlike local disks (in Amazon terms, “instance stores”), Cinder volumes can be disconnected and connected to other machines.
The auto key in this command means the server configuration ID will be generated automatically:
+--------------------------------------+--------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name   | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+--------+-----------+------+-----------+------+-------+-------------+-----------+
| fc275dcc-f51a-48c3-b0c3-c3fdd300dd65 | myflvr | 1024      | 0    | 0         |      | 2     | 1.0         | True      |
+--------------------------------------+--------+-----------+------+-----------+------+-------+-------------+-----------+
We should copy this ID; we’ll also need it when creating our server.
Next, we create an SSH key:
$ nova keypair-add <key_name> > <file_name>
$ chmod 600 <file_name>
The first command prints a private key, which we redirect to a file; this key can be used to connect to virtual machines via SSH (the connection command in this case will look like: ssh -i <file_name> <user>@<server_address>). If you already have a private-public key pair, you can pass the public key as an argument:
$ nova keypair-add --pub-key <path/to/public/key> <key_name>
For example:
$ nova keypair-add myKey --pub-key /home/user/.ssh/id_rsa.pub
SSH keys can also be added from the control panel (from the “Access” tab in project properties).
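If you don’t yet have a key pair, one can also be generated locally with ssh-keygen and then uploaded. A sketch (the paths and the key name `myKey` are just examples; the upload step is shown as a comment since it needs a sourced rc.sh):

```shell
set -e
# Generate an RSA key pair with no passphrase into a temporary directory.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -q -f "$keydir/mykey"
# Only the owner should be able to read the private key.
chmod 600 "$keydir/mykey"
# Upload the public half (requires a sourced rc.sh):
#   nova keypair-add myKey --pub-key "$keydir/mykey.pub"
ls "$keydir"
```

ssh-keygen writes the private key to the given path and the public key next to it with a .pub suffix; only the public half ever needs to leave your machine.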
Having chosen an image and configuration, we now create our server.
$ nova boot <server_name> --flavor <flavor_id> --nic net-id=<net_id> --key-name myKey --block-device id=<image_id>,source=image,dest=volume,size=10,device=vda,bootindex=0
After the boot command, we indicate the server name, the configuration (flavor) ID, the network ID, the SSH key name, and the image ID.
When we created our control panel and images, we tried to exclude the possibility of transferring unencrypted passwords across the network. This is why our images aren’t given a plaintext password, but a password hash generated in the control panel.
To access the machine from the console, simply enter your login and press Enter. Access via SSH is only possible using a key.
Disk Operations
A new disk can be created and connected to the server using the command:
$ cinder create --name <disk_name> <size_in_GB>
If the disk is successfully created, a table listing its properties will be printed in the console:
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| attachments       | []                                   |
| availability_zone | ru-1a                                |
| bootable          | false                                |
| created_at        | 2014-10-23T11:10:15.000000           |
| description       | None                                 |
| encrypted         | False                                |
| id                | 76586803-9cfd-4f75-931d-0a4dee98e496 |
| metadata          | {}                                   |
| name              | mydisk                               |
| size              | 5                                    |
| snapshot_id       | None                                 |
| source_volid      | None                                 |
| status            | creating                             |
| user_id           | 6f862e43d4a84f359928948fb658d695     |
| volume_type       | default                              |
+-------------------+--------------------------------------+
To connect a disk to the server, we copy its ID from the table and execute the command:
$ nova volume-attach <server_id> <volume_id>
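Since the cinder output is a two-column property/value table, the new volume’s ID can be captured with the same kind of awk filter and fed straight to the attach command. A sketch against a saved copy of two rows from the table above (the attach line is commented out since it needs a live server, and <server_id> is left as a placeholder):

```shell
# Extract the value of the "id" property from `cinder create` output.
volume_id=$(awk -F'|' '
  { gsub(/^ +| +$/, "", $2); gsub(/^ +| +$/, "", $3) }  # trim property and value cells
  $2 == "id" { print $3 }
' <<'EOF'
| id                | 76586803-9cfd-4f75-931d-0a4dee98e496 |
| name              | mydisk                               |
EOF
)
echo "$volume_id"
# Attach it (requires a live server):
#   nova volume-attach <server_id> "$volume_id"
```

Against a live cloud, the heredoc would simply be replaced by piping `cinder create` into awk.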
Creating and Assigning an IP Address
To create an external IP address that the server will be accessible from over the Internet, we enter the command:
$ neutron floatingip-create external-network
We then assign the address to the server:
$ nova floating-ip-associate <server_id> <floating_ip>
Power Management and Rebooting
There are two types of server reboots: software (a soft or warm reboot) and hardware (by cutting off its power; a cold or hard reboot).
We enter the following to perform a soft reboot:
$ nova reboot <server_id>
and for a hard reboot:
$ nova reboot --hard <server_id>
Power can be managed with the start and stop commands:
# enable a specific server
$ nova start <server_id>
# disable a specific server
$ nova stop <server_id>
Network Port Operations
We create a new network:
$ neutron net-create <network_name>
When the command has been executed, a table with information about the network will be printed in the console:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| id             | add73ca5-6120-43bd-bb56-d1d8d71d21ac |
| name           | localnet                             |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | d15391cc95474b1ab6bd81fb2a73bc5c     |
+----------------+--------------------------------------+
A subnet can be created in this network using the following command:
$ neutron subnet-create --name <subnet_name> <net_id> 192.168.1.0/24
(The network ID can be taken from the previous printout).
Then, we create a network port:
$ neutron port-create <net_id>
and attach it to the server:
$ nova interface-attach --port-id <port_id> <server_id>
Conclusion
This article is only a short introduction to the OpenStack API. If you have any questions about console clients, we will be happy to answer them in the comments below.
We’ll take a more detailed look at other API features in future articles.