Installing OpenStack on CentOS 6


This section describes a basic installation of OpenStack on CentOS 6. ORCA requires the NEuca patch described in the "Adding NEuca to OpenStack" section below.

1. Install CentOS 6

2. Disable SELinux: replace "enforcing" with "disabled" in /etc/sysconfig/selinux, then reboot.
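The edit above can be done in one command; verify the result before rebooting:

```shell
# Switch SELinux from enforcing to disabled in the config file
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
# Confirm the change took effect, then reboot
grep '^SELINUX=' /etc/sysconfig/selinux
sudo reboot
```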

3. Add the yum repositories:

EPEL repo:

sudo yum install <EPEL release RPM URL>

OpenStack repo (from Grid Dynamics):

sudo yum install <Grid Dynamics OpenStack repo RPM URL>

4. We want to use the "diablo" directory, not "diablo-3". Edit /etc/yum.repos.d/openstack.repo to look like the following:


name=OpenStack Dependencies

5. Install the OpenStack RPMs

Cloud Controller:

sudo yum install euca2ools openstack-nova-node-full mysql-server

Compute Nodes:

sudo yum install openstack-nova-node-compute 

6. Set up support services (compute nodes only require libvirtd):

sudo chkconfig libvirtd on
sudo service libvirtd start
sudo chkconfig mysqld on
sudo service mysqld start
sudo chkconfig rabbitmq-server on
sudo service rabbitmq-server start

7. Create MySQL database on Cloud Controller

Set the MySQL root password (these instructions use "nova"):

mysqladmin -uroot password nova

Script to set up the database for OpenStack:



DB_NAME="nova"
DB_USER="nova"
DB_PASS="nova"            # MySQL root password set above
CC_HOST="A.B.C.D"         # IPv4 address of the cloud controller
HOSTS='node1 node2 node3' # compute nodes list

mysqladmin -uroot -p$DB_PASS -f drop $DB_NAME
mysqladmin -uroot -p$DB_PASS create $DB_NAME

for h in $HOSTS localhost; do
        echo "GRANT ALL PRIVILEGES ON $DB_NAME.* TO '$DB_USER'@'$h' IDENTIFIED BY '$DB_PASS';" | mysql -uroot -p$DB_PASS mysql
done
echo "GRANT ALL PRIVILEGES ON $DB_NAME.* TO root IDENTIFIED BY '$DB_PASS';" | mysql -uroot -p$DB_PASS mysql

nova-manage db sync
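To sanity-check the result (assuming the root password "nova" set above and a database user named "nova"), confirm that the database and grants exist:

```shell
# List databases; "nova" should appear in the output
echo "SHOW DATABASES;" | mysql -uroot -pnova
# Show which hosts the nova user may connect from
echo "SELECT User, Host FROM mysql.user WHERE User='nova';" | mysql -uroot -pnova mysql
```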

8. Configure firewall

Cloud Controller

sudo iptables -I INPUT 1 -p tcp --dport 5672 -j ACCEPT   # RabbitMQ
sudo iptables -I INPUT 1 -p tcp --dport 3306 -j ACCEPT   # MySQL
sudo iptables -I INPUT 1 -p tcp --dport 9292 -j ACCEPT   # Glance API
sudo iptables -I INPUT 1 -p tcp --dport 6080 -j ACCEPT   # VNC proxy
sudo iptables -I INPUT 1 -p tcp --dport 8773 -j ACCEPT   # EC2 API
sudo iptables -I INPUT 1 -p tcp --dport 8774 -j ACCEPT   # OpenStack API
sudo iptables -I INPUT 1 -p udp --dport 67 -j ACCEPT     # DHCP

All Compute Nodes

sudo iptables -I INPUT 1 -p tcp -s <CLOUD_CONTROLLER_IP_ADDRESS> --dport 5900:6400 -j ACCEPT
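On CentOS 6, rules inserted with iptables are lost at reboot unless saved. One way to persist them:

```shell
# Write the current rules to /etc/sysconfig/iptables so they survive a reboot
sudo service iptables save
```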

9. Configure /etc/nova/nova.conf. The example below uses a single NIC and requires a VLAN (tag 100 by default). Replace CLOUD_CONTROLLER_IP with your cloud controller's IP.

Make sure nova.conf is owned by user "nova" (e.g., sudo chown nova:nova /etc/nova/nova.conf).

--logging_context_format_string=%(asctime)s %(name)s: %(levelname)s [%(request_id)s %(user)s %(project)s] %(message)s
--logging_default_format_string=%(asctime)s %(name)s: %(message)s
## Networking
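The networking portion of nova.conf is not reproduced above. As a rough sketch only (the flag names are standard Diablo flags, but the interface names, fixed range, and credentials below are assumptions you must adapt to your site):

```
--network_manager=nova.network.manager.VlanManager
--vlan_interface=eth0
--vlan_start=100
--public_interface=eth0
--fixed_range=10.0.0.0/8
--sql_connection=mysql://root:nova@CLOUD_CONTROLLER_IP/nova
--rabbit_host=CLOUD_CONTROLLER_IP
--glance_api_servers=CLOUD_CONTROLLER_IP:9292
```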

10. Start your OpenStack services

for n in api compute network objectstore scheduler vncproxy; do sudo service openstack-nova-$n start; done
sudo service openstack-glance-api start
sudo service openstack-glance-registry start
for n in node1 node2 node3; do ssh $n sudo service openstack-nova-compute start; done
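To confirm the services came up, nova-manage can list what has registered with the scheduler; a ":-)" in the State column means the service is alive:

```shell
# Each service/host pair should show a smiley (:-)) rather than XXX
nova-manage --flagfile=/etc/nova/nova.conf service list
```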

11. Create public/private networks for VMs. The VLAN tag must be enabled on your switch for this to work. The public network is bridged to vlan100 by default.

nova-manage --flagfile=/etc/nova/nova.conf network create private 1 256
nova-manage --flagfile=/etc/nova/nova.conf floating create <FLOATING_IP_RANGE>
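To verify the networks were created:

```shell
# List the fixed networks known to nova
nova-manage --flagfile=/etc/nova/nova.conf network list
```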

12. Create a user and a project (both called "admin")

nova-manage --flagfile=/etc/nova/nova.conf user admin admin
nova-manage --flagfile=/etc/nova/nova.conf project create admin admin

13. Get the credential files and source the novarc file

nova-manage --flagfile=/etc/nova/nova.conf project zipfile admin admin
source novarc

You should now be able to use the EC2 commands. Try, for example:

euca-describe-images

14. Allow ping/ssh access

euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 default

15. Something is broken about the DHCP server (dnsmasq) that OpenStack deploys. Kill all dnsmasq processes and restart the nova-network service, which relaunches them. If you skip this step you will not be able to access your VMs.

sudo killall dnsmasq
sudo service openstack-nova-network restart

Starting a VM

1. Get a working image with a kernel and initrd.

Simple example image:

tar -xvf ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz

2. Bundle, upload, and register the kernel, initrd, and image

euca-bundle-image -i ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz --kernel true
euca-upload-bundle -b kernel-bucket -m /tmp/ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz.manifest.xml
euca-register kernel-bucket/ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz.manifest.xml

euca-bundle-image -i ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd --ramdisk true
euca-upload-bundle -b ramdisk-bucket -m /tmp/ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd.manifest.xml
euca-register ramdisk-bucket/ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd.manifest.xml

euca-bundle-image -i ttylinux-uec-amd64-12.1_2.6.35-22_1.img --kernel <aki name> --ramdisk <ari name>   # use the aki-/ari- IDs printed by euca-register above
euca-upload-bundle -b image-bucket -m /tmp/ttylinux-uec-amd64-12.1_2.6.35-22_1.img.manifest.xml
euca-register image-bucket/ttylinux-uec-amd64-12.1_2.6.35-22_1.img.manifest.xml
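After registering all three pieces, the image listing should show an aki-, an ari-, and an ami- entry:

```shell
euca-describe-images
```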



3. Add a key

euca-add-keypair mykey > mykey.pem
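ssh will refuse a private key file that is readable by others, so restrict its permissions before use:

```shell
chmod 600 mykey.pem
```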

4. Run the vm

Replace IMAGE_ID with your image ID as shown by euca-describe-images (probably ami-00000003)

euca-run-instances -k mykey IMAGE_ID
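euca-describe-instances shows the instance ID, its state, and the IP address you will need in the next step:

```shell
euca-describe-instances
```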

5. You should have a running instance

Try the following (replace VMs_IP with your VM's IP):

ping VMs_IP
ssh -i mykey.pem root@VMs_IP

Adding NEuca to OpenStack

These instructions do not assume that you installed OpenStack from the Grid Dynamics repo.

If you did install from the Grid Dynamics repo, you can get the source RPMs from the repo. Once you install the source RPMs, the OpenStack source will be in $HOME/rpmbuild/SOURCES/nova-2011.3.tar.gz.

If you installed OpenStack any other way, you will have to find the source code yourself; it will be in a directory called "nova-2011.3".

To add NEuca, get the patch from the link at the bottom of this page and apply it to the "nova-2011.3" source directory:

cd /place/where/the/source/dir/is/located
patch -p0 < openstack.neuca-0.1.patch

Rebuild the source, reinstall, and restart the nova services.
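One possible rebuild workflow using the Grid Dynamics source RPMs (a sketch only: the spec file name and RPM paths below are assumptions; adjust them to your build tree):

```shell
# Re-tar the patched source, rebuild the RPMs, reinstall, restart services
cd $HOME/rpmbuild/SOURCES
tar -czf nova-2011.3.tar.gz nova-2011.3
rpmbuild -ba $HOME/rpmbuild/SPECS/openstack-nova.spec
sudo rpm -Uvh --force $HOME/rpmbuild/RPMS/noarch/openstack-nova-*.rpm
for n in api compute network objectstore scheduler vncproxy; do
    sudo service openstack-nova-$n restart
done
```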

The patch is known to work with nova-2011.3 but may work with other versions with minor modifications.

References (many of these instructions were borrowed from here)