Changes between Version 36 and Version 37 of Eucalyptus-1.6.2-Setup

Timestamp:
06/15/10 15:20:58 (9 years ago)
Author:
shuang (IP: 152.54.8.247)

  • Eucalyptus-1.6.2-Setup

    v36 v37  
    107107}}} 
    108108 
     109Note: eucalyptus.conf states that the CN's VNET_PUBINTERFACE and VNET_PRIVINTERFACE can be the name of either the bridge (br0 in our case) or the physical interface; however, using the bridge name did not work in our setup. 
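For reference, the corresponding node-controller settings might look like the following (a sketch: the file path assumes the /opt/eucalyptus install location used below, and the interface names are from our setup; adjust to yours):
{{{
# /opt/eucalyptus/etc/eucalyptus/eucalyptus.conf (node controller excerpt)
# Use the physical interface here; the bridge name (br0) did not work for us.
VNET_PUBINTERFACE="eth0"
VNET_PRIVINTERFACE="eth0"
VNET_BRIDGE="br0"
}}}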
    109110=== Configuring Eucalyptus === 
    110111 1. Euca components. Assume Euca is installed in /opt/eucalyptus,  
     
    223224 
    224225 
    225  
    226  
    227 === How Eucalyptus allocates VLAN tags === 
    228  
    229 Eucalyptus uses VLANs as an isolation mechanism between security groups. When a security group is created (euca-add-group) and instances (VMs) are then started within it, Euca allocates a VLAN for the group, creates bridges on the individual hosts (head and worker/client nodes), and attaches the VMs to those bridges instead of to the default bridge. VLAN allocation in Eucalyptus is simple. The VNET_NETMASK parameter in the configuration file gives the width of the mask used for address assignment and, consequently, the total number of VMs that can be created (call this MAXHOSTS = 2^(32-MASK_WIDTH) - 2). The VNET_ADDRSPERNET parameter dictates the maximum number of VMs per security group/VLAN. The total number of VLANs the system will use is therefore MAXHOSTS/VNET_ADDRSPERNET, and the maximum VLAN tag Eucalyptus will use appears to be ceil(MAXHOSTS/VNET_ADDRSPERNET) - 1. For example, with a 24-bit mask MAXHOSTS=254, and with VNET_ADDRSPERNET=16 the maximum VLAN tag Eucalyptus will use is 15. The default VLAN tag is 10 (for the default security group). 
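The arithmetic above can be checked with a quick shell sketch (values from the example; rounding the VLAN count up is our assumption, chosen so the result matches the example's maximum tag of 15):
{{{
# Sketch of the VLAN accounting described above.
MASK_WIDTH=24            # e.g. VNET_NETMASK=255.255.255.0
VNET_ADDRSPERNET=16

MAXHOSTS=$(( (1 << (32 - MASK_WIDTH)) - 2 ))                           # 254
NUM_VLANS=$(( (MAXHOSTS + VNET_ADDRSPERNET - 1) / VNET_ADDRSPERNET ))  # ceiling division
MAX_TAG=$(( NUM_VLANS - 1 ))

echo "MAXHOSTS=$MAXHOSTS NUM_VLANS=$NUM_VLANS MAX_TAG=$MAX_TAG"
# MAXHOSTS=254 NUM_VLANS=16 MAX_TAG=15
}}}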
    230  
    231 === Testing Eucalyptus === 
    232  
    233  1. [HN as root]  Download and install [http://open.eucalyptus.com/downloads euca2ools 1.0] 
    234  1. Register a user through the head node portal and [http://open.eucalyptus.com/wiki/EucalyptusGettingStarted_v1.5.2 acquire credentials] 
    235  1. [HN as regular user] Login, install the credentials and try 
    236 {{{ 
    237 euca-describe-availability-zones verbose 
    238 }}} 
    239  1. [HN as root] If client machines aren't showing up, try restarting cloud controller and portal 
    240 {{{ 
    241 # /etc/init.d/eucalyptus-cc restart 
    242 # /etc/init.d/eucalyptus-cloud restart 
    243 }}} 
    244  1. [HN as regular user] Install a stock kernel, filesystem and ramdisk into walrus using these [http://open.eucalyptus.com/wiki/EucalyptusImageManagement_v1.5.2 instructions] 
    245 {{{ 
    246 $ tar -zxf euca-ubuntu-9.04-x86_64.tar.gz 
    247 $ cd euca-ubuntu-9.04-x86_64/kvm-kernel 
    248 $ euca-bundle-image -i vmlinuz-2.6.28-11-generic --kernel true 
    249 $ euca-upload-bundle -b kernels -m /tmp/vmlinuz-2.6.28-11-generic.manifest.xml 
    250 $ euca-register kernels/vmlinuz-2.6.28-11-generic.manifest.xml 
    251 $ euca-bundle-image -i initrd.img-2.6.28-11-generic --ramdisk true 
    252 $ euca-upload-bundle -b ramdisks -m /tmp/initrd.img-2.6.28-11-generic.manifest.xml 
    253 $ euca-register ramdisks/initrd.img-2.6.28-11-generic.manifest.xml 
    254 $ cd .. 
    255 $ euca-bundle-image -i ubuntu.9-04.x86-64.img 
    256 $ euca-upload-bundle -b images -m /tmp/ubuntu.9-04.x86-64.img.manifest.xml 
    257 $ euca-register images/ubuntu.9-04.x86-64.img.manifest.xml 
    258 $ euca-describe-images 
    259 }}} 
    260  1. Generate ssh credentials for logging into the VMs 
    261 {{{ 
    262 euca-add-keypair mykey >mykey.private 
    263 }}} 
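ssh will usually refuse a private key file with loose permissions, so it is worth tightening them on the file created above (file name taken from the previous command):
{{{
$ chmod 0600 mykey.private
}}}
The key is later passed to ssh when logging into an instance, e.g. `ssh -i mykey.private <user>@<instance-ip>`; the login user depends on the image.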
    264  1. Attempt to create some VMs 
    265 {{{ 
    266 euca-run-instances --addressing private -k mykey -n <number of instances to start> <emi-id>  
    267 }}} 
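Instances can take a minute or two to boot; one way to wait for them, sketched under the assumption that euca-describe-instances prints a state column containing "running" once a VM is up:
{{{
$ until euca-describe-instances | grep -q running; do sleep 5; done
$ euca-describe-instances
}}}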
    268  
    269  
    270226=== Troubleshooting === 
    271227 
     
    273229{{{ 
    274230$ brctl show 
     231}}} 
     232If something is wrong, the procedure can be performed manually so you can check the correctness of every step: 
     233{{{ 
     234$ sudo vconfig add eth0 10 
     235$ sudo ifconfig eth0.10 up 
     236$ sudo brctl addbr testbr10  
     237$ sudo brctl addif testbr10 eth0.10 
     238$ sudo ifconfig testbr10 10.0.0.2 up 
     239}}} 
     240Do the same on the front-end, but give testbr10 the address 10.0.0.1, and make sure it can ping 10.0.0.2. Then undo these steps on both the front-end and the node: 
     241{{{ 
     242$ sudo ifconfig testbr10 down 
     243$ sudo ifconfig eth0.10 down 
     244$ sudo brctl delbr testbr10 
     245$ sudo vconfig rem eth0.10 
    275246}}} 
    276247 1. Check that the VMs were created on the machines (you may have to hunt for them, since you don't know which compute node a given VM will be created on):