===============================
Getting Started with OpenNebula
===============================

OpenNebula is an open-source solution for the comprehensive management of virtualized data centers to enable the mixed
use of private, public, and hybrid IaaS clouds.

Dependencies
============

The driver requires Python's ``lxml`` library to be installed. It also requires an OpenNebula installation running
version ``4.12`` or greater.

Configuration
=============

The following example illustrates some of the options that can be set. These parameters are discussed in more detail
below.

.. code-block:: yaml

    # Note: This example is for /etc/salt/cloud.providers or any file in the
    # /etc/salt/cloud.providers.d/ directory.

    my-opennebula-provider:
      # Set up the location of the salt master
      #
      minion:
        master: saltmaster.example.com

      # Define xml_rpc setting which Salt-Cloud uses to connect to the OpenNebula API. Required.
      #
      xml_rpc: http://localhost:2633/RPC2

      # Define the OpenNebula access credentials. This can be the main "oneadmin" user that OpenNebula uses as the
      # OpenNebula main admin, or it can be a user defined in the OpenNebula instance. Required.
      #
      user: oneadmin
      password: JHGhgsayu32jsa

      # Define the private key location that is used by OpenNebula to access new VMs. This setting is required if
      # provisioning new VMs or accessing VMs previously created with the associated public key.
      #
      private_key: /path/to/private/key

      driver: opennebula

Access Credentials
==================

The Salt Cloud driver for OpenNebula was written using OpenNebula's native XML RPC API. Every interaction with
OpenNebula's API requires a ``username`` and ``password`` to make the connection from the machine running Salt Cloud
to the API running on the OpenNebula instance. Based on the access credentials passed in, OpenNebula filters the
commands that the user can perform or the information for which the user can query. For example, the images that a
user can view with a ``--list-images`` command are the images that the connected user and the connected user's groups
can access.
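
As a quick illustration, listing images through the provider defined above returns only the images visible to the
configured user and that user's groups; pointing the provider at a less privileged account returns a correspondingly
smaller set:

.. code-block:: bash

    salt-cloud --list-images my-opennebula-provider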

Key Pairs
=========

Salt Cloud needs to be able to access a virtual machine in order to install the Salt Minion by using a public/private
key pair. The virtual machine will need to be seeded with the public key, which is laid down by the OpenNebula
template. Salt Cloud then uses the corresponding private key, provided by the ``private_key`` setting in the cloud
provider file, to SSH into the new virtual machine.

To seed the virtual machine with the public key, the public key must be added to the OpenNebula template. If using the
OpenNebula web interface, navigate to the template, then click ``Update``. Click the ``Context`` tab. Under the
``Network & SSH`` section, click ``Add SSH Contextualization`` and paste the public key in the ``Public Key`` box.
Don't forget to save your changes by clicking the green ``Update`` button.

.. note::

    The key pair must not have a pass-phrase.
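
If you need to generate a suitable key pair, a passphrase-less pair can be created with ``ssh-keygen``; the path below
is only an example and should match the ``private_key`` setting in the cloud provider file:

.. code-block:: bash

    # Creates /path/to/private/key and /path/to/private/key.pub with no passphrase
    ssh-keygen -t rsa -b 4096 -N "" -f /path/to/private/key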

Cloud Profiles
==============

Set up an initial profile at either ``/etc/salt/cloud.profiles`` or the ``/etc/salt/cloud.profiles.d/`` directory.

.. code-block:: yaml

    my-opennebula-profile:
      provider: my-opennebula-provider
      image: Ubuntu-14.04

The profile can now be realized with a salt command:

.. code-block:: bash

    salt-cloud -p my-opennebula-profile my-new-vm

This will create a new instance named ``my-new-vm`` in OpenNebula. The minion that is installed on this instance will
have a minion id of ``my-new-vm``. If the command was executed on the salt-master, its Salt key will automatically be
signed on the master.
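
To confirm that the new minion's key was accepted, the accepted keys can be listed on the salt-master with the standard
``salt-key`` command:

.. code-block:: bash

    salt-key -l accepted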

Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:

.. code-block:: bash

    salt my-new-vm test.version

OpenNebula uses an image --> template --> virtual machine paradigm where the template draws on the image, or disk,
and virtual machines are created from templates. Because of this, there is no need to define a ``size`` in the cloud
profile. The size of the virtual machine is defined in the template.

Change Disk Size
================

You can now change the size of a VM on creation by cloning an image and expanding the size. You can accomplish this
with the cloud profile settings shown below.

.. code-block:: yaml

    my-opennebula-profile:
      provider: my-opennebula-provider
      image: Ubuntu-14.04
      disk:
        disk0:
          disk_type: clone
          size: 8096
          image: centos7-base-image-v2
        disk1:
          disk_type: volatile
          type: swap
          size: 4096
        disk2:
          disk_type: volatile
          size: 4096
          type: fs
          format: ext3

There are currently two different ``disk_type`` values a user can use: ``volatile`` and ``clone``. The ``clone`` type,
which is required when specifying devices, will clone an image in OpenNebula and expand it to the size specified in the
profile settings. By default this will clone the image attached to the template specified in the profile, but a user
can add the ``image`` argument under the disk definition. For example, the profile below will not use Ubuntu-14.04 for
the cloned disk image; it will use the centos7-base-image image:

.. code-block:: yaml

    my-opennebula-profile:
      provider: my-opennebula-provider
      image: Ubuntu-14.04
      disk:
        disk0:
          disk_type: clone
          size: 8096
          image: centos7-base-image

If you want to use the image attached to the template set in the profile, simply remove the ``image`` argument as shown
below. The profile below will clone the Ubuntu-14.04 image and expand the disk to 8GB:

.. code-block:: yaml

    my-opennebula-profile:
      provider: my-opennebula-provider
      image: Ubuntu-14.04
      disk:
        disk0:
          disk_type: clone
          size: 8096

A user can also currently specify ``swap`` or ``fs`` disks. Below is an example of this profile setting:

.. code-block:: yaml

    my-opennebula-profile:
      provider: my-opennebula-provider
      image: Ubuntu-14.04
      disk:
        disk0:
          disk_type: clone
          size: 8096
        disk1:
          disk_type: volatile
          type: swap
          size: 4096
        disk2:
          disk_type: volatile
          size: 4096
          type: fs
          format: ext3

The example above will attach both a swap disk and an ext3 filesystem, each with a size of 4GB. Note that if you define
other disks, you have to define the image disk to clone, because the template will write over the entire ``DISK=[]``
template definition on creation.

.. _opennebula-required-settings:

Required Settings
=================

The following settings are always required for OpenNebula:

.. code-block:: yaml

    my-opennebula-config:
      xml_rpc: http://localhost:2633/RPC2
      user: oneadmin
      password: JHGhgsayu32jsa
      driver: opennebula

Required Settings for VM Deployment
-----------------------------------

The settings defined in the :ref:`opennebula-required-settings` section are required for all interactions with
OpenNebula. However, when deploying a virtual machine via Salt Cloud, an additional setting, ``private_key``, is also
required:

.. code-block:: yaml

    my-opennebula-config:
      private_key: /path/to/private/key

Listing Images
==============

Images can be queried on OpenNebula by passing the ``--list-images`` argument to Salt Cloud:

.. code-block:: bash

    salt-cloud --list-images opennebula

Listing Locations
=================

In OpenNebula, locations are defined as ``hosts``. Locations, or "hosts", can be queried on OpenNebula by passing the
``--list-locations`` argument to Salt Cloud:

.. code-block:: bash

    salt-cloud --list-locations opennebula

Listing Sizes
=============

Sizes are defined by templates in OpenNebula. As such, the ``--list-sizes`` call returns an empty dictionary since
there are no sizes to return.
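
The call itself still works; it simply returns no sizes:

.. code-block:: bash

    salt-cloud --list-sizes opennebula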

Additional OpenNebula API Functionality
=======================================

The Salt Cloud driver for OpenNebula was written using OpenNebula's native XML RPC API. As such, many ``--function``
and ``--action`` calls were added to the OpenNebula driver to enhance support for an OpenNebula infrastructure with
additional control from Salt Cloud. See the :py:mod:`OpenNebula function definitions <salt.cloud.clouds.opennebula>`
for more information.
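
For example, assuming the ``image_info`` function is available in your version of the driver, details for a single
image can be queried with a ``--function`` call (the image name here is just a placeholder):

.. code-block:: bash

    salt-cloud -f image_info opennebula name=Ubuntu-14.04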

Access via DNS entry instead of IP
==================================

Some OpenNebula installations do not assign IP addresses to new VMs; instead, they establish the new VM's hostname
based on OpenNebula's name of the VM and then allocate an IP out of DHCP, with dynamic DNS attaching the hostname. This
driver supports this behavior by adding the entry ``fqdn_base`` to the driver configuration or the OpenNebula profile
with a value matching the base fully-qualified domain. For example:

.. code-block:: yaml

    # Note: This example is for /etc/salt/cloud.providers or any file in the
    # /etc/salt/cloud.providers.d/ directory.

    my-opennebula-provider:
      [...]
      fqdn_base: corp.example.com
      [...]