Getting started with the OpenStack Manila File share service by Canonical
Nov 26, 2016 · 18 minute read · Category: work
Tags: juju
Words: 4591
Manila is the file share service project for OpenStack. This post explores the Juju manila charms that provide the Manila file share service, and provides a practical set of instructions to get your own mini OpenStack system up and running (on a powerful machine) so that you can explore the service.
I work at Canonical in the OpenStack Charms team, working on software to make deploying and managing an OpenStack system as easy, convenient and flexible as possible.
Note that, unless otherwise instructed, the charms install from the distro archives, which for xenial (16.04LTS) is version 2.0.0 of the manila software.
Background
OpenStack is essentially a collection of software projects that, taken together, can be used to make a private cloud. It's used at Rackspace, and lots of enterprises use it to power their own private clouds. There are various projects in OpenStack. For example, Neutron is the networking project, and Nova is the compute project, or the thing that launches and manages virtual machines.
Now, there's a whole bunch of other stuff, such as storage (Cinder & Ceph) and image management (Glance), but there are way too many things to go into here. So what I'm really saying is that this page is probably not the place to start with OpenStack!
Manila provides NFS/CIFS type shares to VMs, and provides an API and CLI to manage those shares and ensure they can be accessed by tenant (project in Keystone V3) machines.
The Manila Juju charms provide an easy way to model the manila file share service in a Juju deployed and maintained OpenStack system.
At the time of writing, two manila juju charms are available:
- Manila - the main manila charm providing the service.
- Manila Generic - the back end configuration charm.
Manila concepts
Manila consists of 4 services:
- manila-api: this provides the endpoint for the API, and thus the endpoint that the CLI connects to. This is registered with keystone (the identity service) by the manila charm.
- manila-scheduler: whilst the API receives and responds to commands, the scheduler actually does the work of setting up, maintaining, and tearing down the shares, in terms of coordinating the activities of the actions, e.g. creating a share, setting up networking for the share, etc.
- manila-share: whilst manila-scheduler manages the changes that are made to share back ends, the manila-share process actually hooks into the back-end drivers to perform the actions. There can be multiple back-end drivers configured, and they are implemented in the manila-share process.
- manila-data: manila-data is new, and its role in the manila ecosystem of services is to perform 'data' operations, e.g. replication, copying data, etc.
For more details on the manila architecture please go here.
So, as an example, creating a share using a back-end driver will involve the manila-api responding to the request, then the manila-scheduler taking that request and sending it to the appropriate manila-share service (that is configured with the appropriate back-end), and then the manila-share service communicating with the back-end device (or driver) and creating the share.
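Once the system described later in this post is deployed, a quick way to see these services is to check them on the manila unit. This is just a sketch; the unit name and systemd service names are assumptions based on the Ubuntu packages:
# check that the three main manila services are running on the manila unit
# (manila/0 and the service names are assumptions; adjust for your deployment)
$ juju ssh manila/0 "systemctl status manila-api manila-scheduler manila-share"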
Back ends
When configuring the File Share service, it is required to declare at least one back end. A back end is a service that provides shares to consumers. e.g. a NetApp appliance, a GlusterFS array, a Ceph storage array, etc.
Manila manages back ends to provide the file shares to the tenant machines. It does this by sending the back ends commands to create/manage their shares, and then (optionally) it can instruct the network (via neutron) to make the necessary changes to make the share reachable from the tenant's machines.
In order to support many file share back ends, manila uses backend plugins to abstract concrete implementations from the manila core. In this post, we are dealing with the generic back end.
Manila can be configured to run on a single node or across multiple nodes, and it can provision shares from one or more back ends. The OpenStack File Share service allows you to offer file-share services to users of an OpenStack installation.
Generic Back end
In this post, I'll be covering the first back end that the Juju Charms for OpenStack provide, which is the generic back end. Essentially, this is an NFS server that uses the Cinder block storage service as a backing for the share.
Over time, we expect more back end charms to become available. Manila supports a variety of back ends, and I'll (hopefully) be covering how to write a new back end configuration charm for manila in a future post.
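Once the charms below are deployed and related, you can peek at what has been rendered for the generic back end. This is only a sketch; the section name and the exact options shown are assumptions about what the rendered manila.conf will contain:
# inspect the back end section written into manila.conf on the manila unit
$ juju ssh manila/0 "sudo grep -A 6 '^\[generic\]' /etc/manila/manila.conf"
# expect something along the lines of:
#   share_driver = manila.share.drivers.generic.GenericShareDriver
#   driver_handles_share_servers = True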
Practical:
This is where it gets a bit more fun. We'll launch (part of) an OpenStack system with the manila file share service and then connect a tenant machine to a share.
This OpenStack system will consist of the following components (a sketch of how manila is wired into the Juju model follows the list):
- A 3 node Ceph cluster as a storage cluster for cinder
- Cinder, the block storage component of OpenStack -- this uses Ceph to provide a block device to Manila
- Glance, the image service, which will store the images that are used to launch the Ubuntu and manila instances.
- Keystone, the identity service component, which manages users, groups, tenants (now called projects) and permissions in the system.
- Nova-compute, the 'compute' service, which provides the compute node on which machine instances (launched from images) run in OpenStack.
- Neutron networking, which provides all of the network management between different parts of an OpenStack system.
- Percona-cluster (an HA version of MySQL) to provide database services to the other OpenStack services, but not to tenants.
- Rabbitmq, an AMQP message broker, which provides the asynchronous backbone for components/services in OpenStack. This is how, say, the OpenStack client 'talks' to the different components.
And of course:
- Manila, the file share service.
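The bundle used later in this post wires all of these applications together. Purely as an illustration of how Juju models this, the manila-specific part boils down to roughly the following commands; the application names are assumptions about what the bundle calls them:
# a rough sketch of how manila fits into the Juju model (not the full bundle)
$ juju deploy manila
$ juju deploy manila-generic
$ juju add-relation manila percona-cluster    # database
$ juju add-relation manila rabbitmq-server    # AMQP messaging
$ juju add-relation manila keystone           # registers the API endpoint
$ juju add-relation manila manila-generic     # plugs in the generic back end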
Things you'll need to follow along
I think that learning by doing is the best way to understand a new technology, so I'm going to suggest that you play along. For that you'll need:
- an x64-based computer with 32GB+ of memory and a recent CPU with at least 8 cores. I tested it on a MacBook Pro with 16GB RAM and a 4-core i5, and it was only just possible.
- Ubuntu 16.04LTS (Xenial), or later, up to date.
- ZFS set up. This is needed for efficient containers. I'm running LUKS underneath on my block device, but that's just because I have everything encrypted on the laptop.
- LXD set up properly: This page has the details you need.
- Juju 2.1 or higher.
- Git archive of tools used in this post
There are two deploy options presented here: a fire-and-forget standard deploy, and a more laptop-friendly, sequential, deploy script.
When I ran the normal, parallel, deploy on my 2011 MacBook Pro (i5, 4 core, 16GB of RAM), it topped out at 13GB RAM used, with 5-6GB of swap (on an SSD), had a load average of 30-35 during the deploy, and took over an hour for the deploy to settle. Definitely go and do something else if you're going to test it locally. If you have a natty i7 Xeon with 32GB RAM and 8+ cores, you're obviously going to be a bit quicker. However, I wanted to make sure it was possible to test it locally!
The load average at idle was around 16, but it still pegged the 4 cores at 50-70%, with frequent spikes to 100% across all 4 cores. Also, the number of processes used by the containers was around 800! This was only to show that you can have a play with it.
If you can, run this on a small cluster of LXD machines or, even better, a MaaS cluster.
However, I'm going to assume that you're on a single, but powerful, machine and so:
Setup ZFS
Set up ZFS so that LXD can efficiently share common elements of the root file system between the 10 containers that will be set up. This makes it really fast to start a container, and makes efficient use of your disk. I followed this guide, but set it up on an encrypted luks partition.
However, if you don't want to set up ZFS on a partition or drive, you can set it up in a file inside your current file system. In this case, allocate at least 20GB for the storage. You'll need a command similar to:
$ sudo zpool create lxdpool <file>
to create the zpool. The lxdpool pool will then be used when setting up LXD.
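For example, a file-backed pool can be created along these lines; the file path and size here are just placeholders:
# create a sparse backing file and build the zpool on top of it
$ sudo truncate -s 30G /var/lib/lxd-zfs.img
$ sudo zpool create lxdpool /var/lib/lxd-zfs.img
$ sudo zpool status lxdpool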
Setup LXD
Use this LXD guide to set up the LXD service on your machine. I used the following for my network:
Network and IP: 172.16.1.1/24
DHCP range: 172.16.1.2 -> 172.16.1.200
The addresses 172.16.1.2 to 172.16.1.200 will be allocated to the containers by LXD when Juju creates machines. 172.16.1.1 is the default gateway for the network. We'll map the .201 to .254 addresses to the OpenStack network for floating IP addresses.
Use the lxdpool as your ZFS pool when running the sudo lxd init command.
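Depending on your LXD version, you can check (or adjust) the bridge configuration afterwards. A sketch, assuming the bridge is called lxdbr0 and a LXD release new enough to have the lxc network commands (older releases configure the bridge via sudo dpkg-reconfigure -p medium lxd):
# confirm the bridge matches the network described above (lxdbr0 is assumed)
$ lxc network show lxdbr0
# or set the values explicitly
$ lxc network set lxdbr0 ipv4.address 172.16.1.1/24
$ lxc network set lxdbr0 ipv4.dhcp.ranges 172.16.1.2-172.16.1.200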
Bootstrap the juju controller
First, the Juju controller needs to be bootstrapped so that it can then manage the model that will be deployed.
$ juju bootstrap --config config.yaml localhost lxd
Once this has run, open another terminal and run:
$ watch -n 5 juju status
This will allow you to see what is happening in the system as it deploys.
Deploy the Manila bundle
In the GitHub.com archive of tools for this post, there are several useful scripts and files to help with exploring the charmed version of manila.
In the archive:
config.yaml
lxd-profile.yaml
README.md
/scripts
The config.yaml file is used to bootstrap the Juju 2.0 controller on LXD. From the OpenStack on LXD resource, first ensure that both LXD and Juju are installed, and that LXD is configured to use ZFS as the back end, and then:
$ juju bootstrap --config config.yaml localhost lxd
To bootstrap a Juju controller into which we can place our Manila test bundle.
Now you can either do a fully parallel deploy, or a more laptop-friendly, but slower, scripted sequential deploy:
Normal, parallel, deploy
Deploy the Manila bundle by going into the scripts sub-directory:
$ cd scripts
$ juju deploy manila-juju-2.0.yaml
(Now go and grab lunch if you have a slow machine, or coffee if you have a multi-cored wonder ...)
Scripted, slower, sequential deploy
$ cd scripts
$ juju deploy manila-juju-2.0-no-units.yaml
$ ./01-deploy-slowly.sh
$ juju add-unit ceph -n 2
This deploys all of the charm software to Juju, but the script only adds a unit for each Juju app when the deployment is idle. This stops Juju from trying to do too many things at once on a consumer-grade CPU.
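For reference, the idea behind the script is roughly the following; this is a sketch of the approach, not the actual 01-deploy-slowly.sh, and the application names are assumptions:
#!/bin/bash
# sketch: add one unit per application, waiting for the model to settle in between
wait_for_idle() {
    # crude check: wait until no unit agent reports 'executing' in juju status
    while juju status | grep -q executing; do
        sleep 30
    done
}
for app in percona-cluster rabbitmq-server keystone glance cinder ceph \
           nova-compute neutron-api manila; do
    juju add-unit "$app" -n 1
    wait_for_idle
done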
The final juju command in the listing above (juju add-unit ceph -n 2) adds a further two Ceph units so that Ceph will form its own cluster. Wait for the Juju status output to show Unit is ready for all the units, or Unit is ready and clustered for the ceph units.
After a suitable length of time, all of the units should have settled. When the update-status hook runs, a unit's Agent state will be executing, but otherwise the units should indicate that they are ready, with the exception of the manila units, which are still awaiting some configuration.
The juju status command that you ran earlier in the other terminal will show the status of the units.
Configuring the OpenStack networking
We need to configure two aspects of the networking:
- The external network that the OpenStack system lives in
- The private network for the tenant that is being configured.
The vars.sh file contains environment variables that are used to configure the networking in the mini OpenStack system that has been created:
#!/bin/bash
# Set up vars common to all the configuration scripts
export OVERCLOUD="./novarc"
# Set network defaults, if not already set.
[[ -z "$GATEWAY" ]] && export GATEWAY="172.16.1.1"
[[ -z "$CIDR_EXT" ]] && export CIDR_EXT="172.16.1.0/24"
[[ -z "$FIP_RANGE" ]] && export FIP_RANGE="172.16.1.201:172.16.1.254"
[[ -z "$NAMESERVER" ]] && export NAMESERVER="172.16.1.1"
[[ -z "$CIDR_PRIV" ]] && export CIDR_PRIV="192.168.21.0/24"
You need to change these to match your system:
- GATEWAY needs to match the LXD network gateway.
- CIDR_EXT needs to match the network that was set up with LXD.
- FIP_RANGE is the floating IP range that is used to assign public addresses to a server so that it can be reached from outside the private network. This is at the top end of the CIDR that was allocated to LXD.
- NAMESERVER is the name server in the external network, which will be the same as the GATEWAY.
- CIDR_PRIV is the network that is assigned to the tenant network -- i.e. it is private within the OpenStack system.
The CIDR_EXT network spans both the LXD network (172.16.1.1-200) and the OpenStack ext_net, which will use .201 to .254.
The novarc file (in the same directory) configures the environment variables that the OpenStack client(s) use to connect to the running instance. You shouldn't need to change this:
# clear any OS_* variables already set in the environment
_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
    unset $param
done
unset _OS_PARAMS
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_ID=Default
export OS_PROJECT_NAME=admin
export OS_PASSWORD=openstack
# look up the keystone unit's address from juju status
keystone_ip=`juju status keystone --format=oneline | cut -f 3 -d " " | tail -n 1 | tr -d '\n'`
export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${keystone_ip}:5000
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PROJECT_DOMAIN_NAME=Default
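Once sourced, a quick sanity check that the client can reach keystone is something like:
# quick check that the OpenStack client can talk to the cloud
$ source ./novarc
$ openstack catalog list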
So, with those variables set up, you can either take the easy route, and run:
$ ./02-configure-networking.sh
Or do the manual steps (in the scripts subdirectory):
$ source novarc
$ ./neutron-ext-net.py --network-type=flat \
-g 172.16.1.1 \
-c 172.16.1.0/24 \
    -f 172.16.1.201:172.16.1.254 ext_net
$ ./neutron-tenant-net.py -t admin -r provider-router \
-N 172.16.1.1 internal 192.168.21.0/24
We also need to ensure we can reach those machines. The easiest way to do that (in testing!) is simply to open up the security groups. There's an included script that does that:
$ ./sec_groups.sh
Note: you might need to run ./sec_groups.sh again (don't worry about the errors, if you do) after the instances have been created in the next step. You shouldn't need to, but for some reason, I did. (Please add your thoughts to the comments, if you know what might be going on!)
Configure OpenStack
Now that the applications have been installed and the networking configured (via the charms), it is necessary to configure enough of OpenStack so that manila can create a share.
The 03-configure-openstack.sh script does all of the work. It's a little complex, but essentially it downloads and installs the various images that are needed to launch test machines and the manila generic NFS share instance.
It also checks for and configures the machine flavors needed to launch instances in nova-compute.
Finally, it assigns floating IPs to the two instances that it creates.
The other thing the script does is check whether the actions it intends to perform have already been done, and skip them if they have.
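Those checks follow a simple pattern along these lines (a sketch, not the script's exact code):
# only create things that don't already exist, e.g. for the xenial image:
if ! openstack image show xenial >/dev/null 2>&1; then
    echo "image 'xenial' not found - it would be downloaded and registered here"
fi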
The manual steps would look something like this (I've omitted the bash '$' for brevity):
# load the common vars
source ./vars.sh
## now everything is with the OVERCLOUD
source $OVERCLOUD
IMAGES=../images
mkdir -p "$IMAGES"
# fetch xenial
wget -O $IMAGES/xenial-server-cloudimg-amd64-disk1.img \
  http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
glance --os-image-api-version 1 image-create --name="xenial" \
  --is-public=true --progress --container-format=bare \
  --disk-format=qcow2 < $IMAGES/xenial-server-cloudimg-amd64-disk1.img
# fetch the manila-service-image
wget -O $IMAGES/manila-service-image-master.qcow2 \
  http://tarballs.openstack.org/manila-image-elements/images/manila-service-image-master.qcow2
glance --os-image-api-version 1 image-create --name="manila-service-image" \
  --is-public=true --progress --container-format=bare \
  --disk-format=qcow2 < $IMAGES/manila-service-image-master.qcow2
## Set up the flavors for the cirros image and the manila-service-image
openstack flavor create manila-service-flavor --id 100 --ram 256 --disk 0 --vcpus 1
openstack flavor create m1.xenial --id 7 --ram 2048 --disk 5 --vcpus 1
# Create demo/testing users, tenants and roles
openstack project create --or-show demo
openstack user create --or-show --project demo --password pass \
--email demo@dev.null demo
openstack role create --or-show Member
openstack role add --user demo --project demo Member
# ensure that a keypair is setup for the user
openstack keypair create demo-user > ./demo-user-rsa
chmod 600 ./demo-user-rsa
## need the network id for 'internal' network to put into these images if they
# don't exist.
net_id=$( openstack network list -c ID -c Name -f value | grep internal | awk '{print $1}' | tr -d '\n')
# create two test vms for share testing
openstack server create --flavor m1.xenial --image xenial --key-name demo-user \
--security-group default --nic net-id=$net_id xenial-test1
openstack server create --flavor m1.xenial --image xenial --key-name demo-user \
--security-group default --nic net-id=$net_id xenial-test2
Now we need to assign the floating IPs to the two Ubuntu instances that were created.
$ openstack ip floating create ext_net
$ openstack ip floating create ext_net
$ openstack server list
$ openstack ip floating list
And then assign them to the two servers:
$ openstack ip floating add <ip_address1> xenial-test1
$ openstack ip floating add <ip_address2> xenial-test2
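At this point it's worth checking that the instances are reachable over their floating IPs (the addresses come from openstack server list); for example:
# log into one of the test instances using the keypair created earlier
$ ssh -i ./demo-user-rsa ubuntu@<floating-ip-of-xenial-test1>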
Configure Manila
Now the manila service needs to be configured within OpenStack. There are two parts to this:
- Creating the share type and a share network that will be used to put the share into.
- Configuring the generic driver charm with this information so that when manila creates a share in the share network configured in step 1, it uses the generic share driver. This seems a little odd, but it makes it possible to define many drivers and plug them into different manila share networks; when a share is then created in a share network, it uses the driver that was configured for it.
The easy way is to just use the 04-configure-manila.sh script. However, it's better to learn by doing, so:
Firstly, create a share type:
$ manila type-create default_share_type true
This creates a share type (which is really just a label used to distinguish between different types) and specifies whether any associated driver handles the share servers (the true argument). Handle, in this context, means create, delete, etc.
Next, create the share network, which ties into the internal network that was created for the two instances; i.e. this share network will be used to create and manage shares for that network.
$ openstack network list -c ID -c Name
+--------------------------------------+----------+
| ID | Name |
+--------------------------------------+----------+
| 4a5da792-08a5-40b7-96c3-562022173942 | ext_net |
| 352eae12-aefc-40d5-a104-1c06259a77ff | internal |
+--------------------------------------+----------+
$ openstack network list -c Subnets -c Name
+----------+--------------------------------------+
| Name | Subnets |
+----------+--------------------------------------+
| ext_net | 1cf91607-d24d-4e44-8ccd-9b00cba624d1 |
| internal | 5df2f528-bdf0-4dd3-a577-4948b4f38017 |
+----------+--------------------------------------+
I've split the openstack network list into two commands so that the output would fit comfortably in this post, but you can combine them into a single command.
Then, using the internal ID, create the share network:
$ manila share-network-create --name test_share_network \
--neutron-net-id <ID value> \
--neutron-subnet-id <Subnets value>
Finally, we need to configure the manila-generic charm with the flavor to create the NFS service instance with, the instance password, and the auth type. For the standard manila NFS instance for the generic driver, this is just a manila:manila username:password setting.
Find the flavor id for the manila-service-flavor we created earlier.
$ openstack flavor list -c ID -c Name | grep manila
| 100 | manila-service-flavor |
And then configure the manila-generic charm:
$ juju config manila-generic \
    driver-service-instance-flavor-id=<flavor-id> \
    driver-handles-share-servers=true \
    driver-service-instance-password=manila \
    driver-auth-type=password
Finally, configure the manila charm with the generic driver as the default:
$ juju config manila default-share-backend=generic
So that's the small manila OpenStack system set up and ready to create a share. Let's do that:
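The following is a sketch of creating and using a share; the names match the ones configured above, and the mount steps inside the instance are an assumption about the xenial image (you may need to install nfs-common first):
# create a 1GB NFS share in the share network configured above
$ manila create NFS 1 --name test-share \
    --share-network test_share_network --share-type default_share_type
# wait for the share to become 'available', then find its export location
$ manila list
$ manila show test-share
# allow access from the tenant network
$ manila access-allow test-share ip 192.168.21.0/24
# then, from inside one of the xenial-test instances:
$ sudo apt install -y nfs-common
$ sudo mount -t nfs <export-location> /mnt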
A comment on back end drivers
So in this post we used the Manila Generic driver, but this isn't really for production use, as the NFS server instance represents a single point of failure for a share. However, it does provide a useful reference driver and can be used experimentally in the lab, and in development, to test bringing applications that need access to shared file directories to an OpenStack cluster, i.e. it's great for testing virtualisation of those existing applications in your system.
Manila provides many, many other drivers. In order to use them with the Juju Manila charm, the back end driver would need to be charmed up, which is to say, have a charm written to support the configuration for that driver. This is how we model software in Juju, and it is comparable to (but, from a Juju perspective, more powerful than) the equivalent concept in, say, Puppet or Chef.
Juju charms are fairly easy to write, and I'm planning on doing a post to show how to write a manila plug-in charm to support another driver. Let me know in the comments, or contact me on twitter/IRC, which ones you are interested in.
Final Thoughts
Manila is actually fairly easy to work with. If it seems complex, it's probably just that OpenStack itself is a complex set of software. Juju, however, really helps with modelling a set of interacting software applications.
I work with Juju every day, but having also worked with Ansible and Puppet in the past, I find Juju really works well for modelling complex sets of interacting software, rather than just pushing out configuration and doing installation.
If you have any thoughts or suggestions, then please leave comments below, or you can contact me on #openstack-charms on Freenode or @ajkavanagh on Twitter.
Resources
These are the resources I used to work out how to do this testing, and also the GitHub repository with the example code. I hope they are useful to you when you're exploring ZFS, LXD, Juju and OpenStack!