.. _getting-started-with-saltify:

============================
Getting Started With Saltify
============================

The Saltify driver is a driver for installing Salt on existing
machines (virtual or bare metal).


Dependencies
============

The Saltify driver has no external dependencies.


Configuration
=============

Because the Saltify driver does not use an actual cloud provider host, it can have a
simple provider configuration. The only thing that is required to be set is the
driver name, and any other potentially useful information, like the location of
the salt-master:

.. code-block:: yaml

    # Note: This example is for /etc/salt/cloud.providers file or any file in
    # the /etc/salt/cloud.providers.d/ directory.

    my-saltify-config:
      minion:
        master: 111.222.333.444
      driver: saltify

However, if you wish to use the more advanced capabilities of salt-cloud, such as
rebooting, listing, and disconnecting machines, then the salt master must fill
the role usually performed by a vendor's cloud management system. The salt master
must be running on the salt-cloud machine, and created nodes must be connected to the
master.
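
For example, if the salt master runs on the same machine as salt-cloud, the
provider configuration might point new minions at that machine's address. The
following is a minimal sketch; the address ``10.0.0.5`` is a placeholder for
your own master's address:

.. code-block:: yaml

    # /etc/salt/cloud.providers.d/saltify.conf
    # Sketch: assumes the salt master and salt-cloud run on the same host,
    # which new minions can reach at 10.0.0.5 (placeholder address).

    my-saltify-config:
      minion:
        master: 10.0.0.5    # address the new minions will use to reach the master
      driver: saltify
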

Additional information about which configuration options apply to which actions
can be studied in the
:ref:`Saltify Module documentation <saltify-module>`
and the
:ref:`Miscellaneous Salt Cloud Options <misc-salt-cloud-options>`
document.


Profiles
========

Saltify requires a separate profile to be configured for each machine that
needs Salt installed [#]_. The initial profile can be set up at
``/etc/salt/cloud.profiles``
or in the ``/etc/salt/cloud.profiles.d/`` directory. Each profile requires
both an ``ssh_host`` and an ``ssh_username`` key parameter as well as either
a ``key_filename`` or a ``password``.

.. [#] Unless you are using a map file to provide the unique parameters.

Profile configuration example:

.. code-block:: yaml

    # /etc/salt/cloud.profiles.d/saltify.conf

    salt-this-machine:
      ssh_host: 12.34.56.78
      ssh_username: root
      key_filename: '/etc/salt/mysshkey.pem'
      provider: my-saltify-config

The machine can now be "Salted" with the following command:

.. code-block:: bash

    salt-cloud -p salt-this-machine my-machine

This will install salt on the machine specified by the cloud profile,
``salt-this-machine``, and will give the machine the minion id of
``my-machine``. If the command was executed on the salt-master, its Salt
key will automatically be accepted by the master.
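
If salt-cloud was run from a machine other than the salt-master, the new
minion's key may still be pending on the master and can be accepted manually.
A short sketch, assuming the minion id ``my-machine`` from the example above:

.. code-block:: bash

    # On the salt-master: list pending keys, then accept the new minion's key
    salt-key -L
    salt-key -a my-machine
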

Once a salt-minion has been successfully installed on the instance, connectivity
to it can be verified with Salt:

.. code-block:: bash

    salt my-machine test.version


Destroy Options
---------------

.. versionadded:: 2018.3.0

For obvious reasons, the ``destroy`` action does not actually vaporize hardware.
If the salt master is connected, it can tear down parts of the client machines.
It will remove the client's key from the salt master,
and can execute the following options:

.. code-block:: yaml

    - remove_config_on_destroy: true
      # default: true
      # Deactivate salt-minion on reboot and
      # delete the minion config and key files from its "/etc/salt" directory,
      #   NOTE: If deactivation was unsuccessful (older Ubuntu machines) then when
      #   salt-minion restarts it will automatically create a new, unwanted, set
      #   of key files. Use the "force_minion_config" option to replace them.

    - shutdown_on_destroy: false
      # default: false
      # last of all, send a "shutdown" command to the client.
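
As an illustration, these options can be placed alongside the other settings
for the machine. The following sketch extends the ``salt-this-machine`` profile
shown earlier; treat the placement and values as an example rather than a
prescription:

.. code-block:: yaml

    # /etc/salt/cloud.profiles.d/saltify.conf
    # Sketch: destroy-related options added to the earlier example profile.

    salt-this-machine:
      ssh_host: 12.34.56.78
      ssh_username: root
      key_filename: '/etc/salt/mysshkey.pem'
      provider: my-saltify-config
      remove_config_on_destroy: true   # clean up /etc/salt on the target
      shutdown_on_destroy: true        # finally, power the target down

The ``destroy`` action itself is invoked as with any other salt-cloud driver,
for example ``salt-cloud -d my-machine``.
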

Wake On LAN
-----------

.. versionadded:: 2018.3.0

In addition to connecting a hardware machine to a Salt master,
you have the option of sending a wake-on-LAN
`magic packet`_
to start that machine running.

.. _magic packet: https://en.wikipedia.org/wiki/Wake-on-LAN

The "magic packet" must be sent by an existing salt minion which is on
the same network segment as the target machine. (Or your router
must be set up especially to route WoL packets.) Your target machine
must be set up to listen for WoL and to respond appropriately.

You must provide the Salt node id of the machine which will send
the WoL packet (parameter ``wol_sender_node``), and
the hardware MAC address of the machine you intend to wake
(parameter ``wake_on_lan_mac``). If both parameters are defined,
the WoL will be sent. The cloud master will then sleep a while
(parameter ``wol_boot_wait``) to give the target machine time to
boot up before we start probing its SSH port to begin deploying
Salt to it. The default sleep time is 30 seconds.

.. code-block:: yaml

    # /etc/salt/cloud.profiles.d/saltify.conf

    salt-this-machine:
      ssh_host: 12.34.56.78
      ssh_username: root
      key_filename: '/etc/salt/mysshkey.pem'
      provider: my-saltify-config
      wake_on_lan_mac: '00:e0:4c:70:2a:b2'  # found with ifconfig
      wol_sender_node: bevymaster           # it's on this network segment
      wol_boot_wait: 45                     # seconds to sleep

Using Map Files
---------------

The settings explained in the section above may also be set in a map file. An
example of how to use the Saltify driver with a map file follows:

.. code-block:: yaml

    # /etc/salt/saltify-map

    make_salty:
      - my-instance-0:
          ssh_host: 12.34.56.78
          ssh_username: root
          password: very-bad-password
      - my-instance-1:
          ssh_host: 44.33.22.11
          ssh_username: root
          password: another-bad-pass

In this example, the names ``my-instance-0`` and ``my-instance-1`` will be the
identifiers of the deployed minions.

Note: The ``ssh_host`` directive is also used for Windows hosts, even though they do
not typically run the SSH service. It indicates the IP address or host name of the
target system.
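
For a Windows target, the profile would supply Windows deployment credentials
rather than SSH ones. The following is a hypothetical sketch only, using the
general salt-cloud Windows deployment options (``win_username``, ``win_password``,
``win_installer``); the exact options and paths for your environment may differ:

.. code-block:: yaml

    # /etc/salt/cloud.profiles.d/saltify.conf
    # Hypothetical sketch for a Windows target; values and paths are placeholders.

    salt-windows-machine:
      ssh_host: 12.34.56.79          # IP address or host name of the Windows target
      win_username: Administrator
      win_password: very-bad-password
      win_installer: /srv/files/Salt-Minion-Setup-AMD64.exe   # placeholder path
      provider: my-saltify-config
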

Note: When using a cloud map with the Saltify driver, the name of the profile
to use, in this case ``make_salty``, must be defined in a profile config. For
example:

.. code-block:: yaml

    # /etc/salt/cloud.profiles.d/saltify.conf

    make_salty:
      provider: my-saltify-config

The machines listed in the map file can now be "Salted" by running the
following salt-cloud map command:

.. code-block:: bash

    salt-cloud -m /etc/salt/saltify-map

This command will install salt on the machines specified in the map and will
give the machines the minion ids ``my-instance-0`` and ``my-instance-1``,
respectively. If the command was executed on the salt-master, their Salt keys
will automatically be accepted by the master.

Connectivity to the new "Salted" instances can now be verified with Salt:

.. code-block:: bash

    salt 'my-instance-*' test.version

Bulk Deployments
----------------

When deploying large numbers of Salt Minions using Saltify, it may be
preferable to organize the configuration in a way that duplicates data
as little as possible. For example, if a group of target systems have
the same credentials, they can be specified in the profile, rather than
in a map file.

.. code-block:: yaml

    # /etc/salt/cloud.profiles.d/saltify.conf

    make_salty:
      provider: my-saltify-config
      ssh_username: root
      password: very-bad-password

.. code-block:: yaml

    # /etc/salt/saltify-map

    make_salty:
      - my-instance-0:
          ssh_host: 12.34.56.78
      - my-instance-1:
          ssh_host: 44.33.22.11

If ``ssh_host`` is not provided, its default value will be the Minion identifier
(``my-instance-0`` and ``my-instance-1``, in the example above). For deployments with
working DNS resolution, this can save a lot of redundant data in the map. Here is an
example map file using DNS names instead of IP addresses:

.. code-block:: yaml

    # /etc/salt/saltify-map

    make_salty:
      - my-instance-0
      - my-instance-1


Credential Verification
=======================

Because the Saltify driver does not actually create VMs, unlike other
salt-cloud drivers, it has special behaviour when the ``deploy`` option is set
to ``False``. When the cloud configuration specifies ``deploy: False``, the
Saltify driver will attempt to authenticate to the target node(s) and return
``True`` for each one that succeeds. This can be useful to verify that ports,
protocols, services and credentials are correctly configured before a live
deployment.

Return values:

- ``True``: Credential verification succeeded
- ``False``: Credential verification failed
- ``None``: Credential verification was not attempted.
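
For example, a credential check could be run by setting ``deploy: False`` in a
profile and then targeting it with salt-cloud as usual. This is a sketch reusing
the ``salt-this-machine`` profile from above:

.. code-block:: yaml

    # /etc/salt/cloud.profiles.d/saltify.conf
    # Sketch: same profile as above, with deployment disabled so salt-cloud
    # only verifies that it can authenticate to the target.

    salt-this-machine:
      ssh_host: 12.34.56.78
      ssh_username: root
      key_filename: '/etc/salt/mysshkey.pem'
      provider: my-saltify-config
      deploy: False

Running ``salt-cloud -p salt-this-machine test-auth`` against this profile should
then report whether authentication succeeded, without installing a minion.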