.. _misc-salt-cloud-options:

================================
Miscellaneous Salt Cloud Options
================================

This page describes various miscellaneous options available in Salt Cloud.
Deploy Script Arguments
=======================

Custom deploy scripts are unlikely to need custom arguments to be passed to
them, but salt-bootstrap has been extended quite a bit, and this may be
necessary. ``script_args`` can be specified in either the profile or the map
file, to pass arguments to the deploy script:

.. code-block:: yaml

    ec2-amazon:
      provider: my-ec2-config
      image: ami-1624987f
      size: t1.micro
      ssh_username: ec2-user
      script: bootstrap-salt
      script_args: -c /tmp/

This has also been tested to work with pipes, if needed:

.. code-block:: yaml

    script_args: '| head'
Selecting the File Transport
============================

By default, Salt Cloud uses SFTP to transfer files to Linux hosts. However, if
SFTP is not available, or specific SCP functionality is needed, Salt Cloud can
be configured to use SCP instead.

.. code-block:: yaml

    file_transport: sftp
    file_transport: scp
Sync After Install
==================

Salt allows users to create custom modules, grains, and states which can be
synchronised to minions to extend Salt with further functionality.

This option will inform Salt Cloud to synchronise your custom modules, grains,
states or all these to the minion just after it has been created. For this to
happen, the following line needs to be added to the main cloud
configuration file:

.. code-block:: yaml

    sync_after_install: all

The available options for this setting are:

.. code-block:: yaml

    modules
    grains
    states
    all
Setting Up New Salt Masters
===========================

It has become increasingly common for users to set up multi-hierarchical
infrastructures using Salt Cloud. This sometimes involves setting up an
instance to be a master in addition to a minion. With that in mind, you can
now lay down master configuration on a machine by specifying master options
in the profile or map file.

.. code-block:: yaml

    make_master: True

This will cause Salt Cloud to generate master keys for the instance, and tell
salt-bootstrap to install the salt-master package, in addition to the
salt-minion package.

The default master configuration is usually appropriate for most users, and
will not be changed unless specific master configuration has been added to the
profile or map:

.. code-block:: yaml

    master:
      user: root
      interface: 0.0.0.0
Setting Up a Salt Syndic with Salt Cloud
========================================

In addition to `setting up new Salt Masters`_, :ref:`syndics <syndic>` can also be
provisioned using Salt Cloud. In order to set up a Salt Syndic via Salt Cloud,
a Salt Master needs to be installed on the new machine and a master configuration
file needs to be set up using the ``make_master`` setting. This setting can be
defined either in a profile config file or in a map file:

.. code-block:: yaml

    make_master: True

To install the Salt Syndic, the only other specification that needs to be
configured is the ``syndic_master`` key to specify the location of the master
that the syndic will be reporting to. This modification needs to be placed
in the ``master`` setting, which can be configured either in the profile,
provider, or ``/etc/salt/cloud`` config file:

.. code-block:: yaml

    master:
      syndic_master: 10.0.0.1  # may be either an IP address or a hostname

Many other Salt Syndic configuration settings and specifications can be passed
through to the new syndic machine via the ``master`` configuration setting.
See the :ref:`syndic` documentation for more information.
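Putting the two settings together, a syndic-provisioning profile might look
like the following sketch (the profile name, provider, image, and master
address here are illustrative, not values from a real environment):

.. code-block:: yaml

    my-syndic-profile:
      provider: my-ec2-config
      image: ami-1624987f
      size: t1.micro
      make_master: True
      master:
        # address of the higher-level master this syndic reports to
        syndic_master: 10.0.0.1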
SSH Port
========

.. versionadded:: 2015.5.0

By default, the SSH port is set to 22. If you want to use a custom port, set
the ``ssh_port`` option in the provider, profile, or map blocks:

.. code-block:: yaml

    ssh_port: 2222
Delete SSH Keys
===============

When Salt Cloud deploys an instance, the SSH public key for the instance is
added to the ``known_hosts`` file for the user that ran the salt-cloud command.
When an instance is deployed, a cloud host generally recycles the IP address
for the instance. When Salt Cloud attempts to deploy an instance using a
recycled IP address that has previously been accessed from the same machine,
the old key in the ``known_hosts`` file will cause a conflict.

In order to mitigate this issue, Salt Cloud can be configured to remove old
keys from the ``known_hosts`` file when destroying the node. In order to do
this, the following line needs to be added to the main cloud configuration
file:

.. code-block:: yaml

    delete_sshkeys: True
Keeping /tmp/ Files
===================

When Salt Cloud deploys an instance, it uploads temporary files to ``/tmp/``
for salt-bootstrap to put in place. After the script has run, they are deleted.
To keep these files around (mostly for debugging purposes), the ``--keep-tmp``
option can be added:

.. code-block:: bash

    salt-cloud -p myprofile mymachine --keep-tmp

For those wondering why ``/tmp/`` was used instead of ``/root/``, this had to
be done for images which require the use of sudo, and therefore do not allow
remote root logins, even for file transfers (which makes ``/root/``
unavailable).
Hide Output From Minion Install
===============================

By default, Salt Cloud will stream the output from the minion deploy script
directly to STDOUT. Although this can be very useful, in certain cases you may
wish to switch this off. The following config option is there to enable or
disable this output:

.. code-block:: yaml

    display_ssh_output: False
Connection Timeout
==================

There are several stages when deploying Salt where Salt Cloud needs to wait
for something to happen: the VM getting its IP address, the VM's SSH port
becoming available, etc.

If you find that the Salt Cloud defaults are not enough and your deployment
fails because Salt Cloud did not wait long enough, there are some settings you
can tweak.

.. admonition:: Note

    All settings should be provided in lowercase.
    All values should be provided in seconds.

You can tweak these settings globally, per cloud provider, or even per profile
definition.
wait_for_ip_timeout
~~~~~~~~~~~~~~~~~~~

The amount of time Salt Cloud should wait for a VM to start and get an IP back
from the cloud host.

Default: varies by cloud provider (between 5 and 25 minutes)
wait_for_ip_interval
~~~~~~~~~~~~~~~~~~~~

The amount of time Salt Cloud should sleep while querying for the VM's IP.

Default: varies by cloud provider (between .5 and 10 seconds)
ssh_connect_timeout
~~~~~~~~~~~~~~~~~~~

The amount of time Salt Cloud should wait for a successful SSH connection to
the VM.

Default: varies by cloud provider (between 5 and 15 minutes)
wait_for_passwd_timeout
~~~~~~~~~~~~~~~~~~~~~~~

The amount of time until an ssh connection can be established via password or
ssh key.

Default: varies by cloud provider (mostly 15 seconds)
wait_for_passwd_maxtries
~~~~~~~~~~~~~~~~~~~~~~~~

The number of connection attempts to make to the VM before giving up.

Default: 15 attempts
wait_for_fun_timeout
~~~~~~~~~~~~~~~~~~~~

Some cloud drivers (namely SoftLayer and SoftLayer-HW) check for an available
IP or a successful SSH connection using a function. This is the amount of time
Salt Cloud should retry such functions before failing.

Default: 15 minutes
wait_for_spot_timeout
~~~~~~~~~~~~~~~~~~~~~

The amount of time Salt Cloud should wait for an EC2 Spot instance to become
available. This setting is only available for the EC2 cloud driver.

Default: 10 minutes
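As a sketch, several of the timeouts above could be tuned together in a single
profile; the profile name, provider, and values below are illustrative
examples, not recommendations:

.. code-block:: yaml

    my-slow-cloud-profile:
      provider: my-ec2-config
      # all values are in seconds
      wait_for_ip_timeout: 1800
      wait_for_ip_interval: 10
      ssh_connect_timeout: 900
      wait_for_passwd_timeout: 30
      wait_for_passwd_maxtries: 20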
Salt Cloud Cache
================

Salt Cloud can maintain a cache of node data, for supported providers. The
following options manage this functionality.

update_cachedir
~~~~~~~~~~~~~~~

On supported cloud providers, whether or not to maintain a cache of nodes
returned from a ``--full-query``. The data will be stored in ``msgpack`` format
under ``<SALT_CACHEDIR>/cloud/active/<DRIVER>/<PROVIDER>/<NODE_NAME>.p``. This
setting can be True or False.

diff_cache_events
~~~~~~~~~~~~~~~~~

When the cloud cachedir is being managed, if differences are encountered
between the data that is returned live from the cloud host and the data in
the cache, fire events which describe the changes. This setting can be True or
False.
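A minimal sketch of enabling both cache options in the main cloud
configuration file (normally ``/etc/salt/cloud``):

.. code-block:: yaml

    update_cachedir: True
    diff_cache_events: True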
Some of these events will contain data which describe a node. Because some of
the fields returned may contain sensitive data, the ``cache_event_strip_fields``
configuration option exists to strip those fields from the event return.

.. code-block:: yaml

    cache_event_strip_fields:
      - password
      - priv_key
The following are events that can be fired based on this data.

salt/cloud/minionid/cache_node_new
**********************************

A new node was found on the cloud host which was not listed in the cloud
cachedir. A dict describing the new node will be contained in the event.

salt/cloud/minionid/cache_node_missing
**************************************

A node that was previously listed in the cloud cachedir is no longer available
on the cloud host.

salt/cloud/minionid/cache_node_diff
***********************************

One or more pieces of data in the cloud cachedir has changed on the cloud
host. A dict containing both the old and the new data will be contained in
the event.
SSH Known Hosts
===============

Normally when bootstrapping a VM, salt-cloud will ignore the SSH host key. This
is because it does not know what the host key is before starting (because it
doesn't exist yet). If strict host key checking is turned on without the key
in the ``known_hosts`` file, then the host will never be available, and cannot
be bootstrapped.

If a provider is able to determine the host key before trying to bootstrap it,
that provider's driver can add it to the ``known_hosts`` file, and then turn on
strict host key checking. This can be set up in the main cloud configuration
file (normally ``/etc/salt/cloud``) or in the provider-specific configuration
file:

.. code-block:: yaml

    known_hosts_file: /path/to/.ssh/known_hosts

If this is not set, it will default to ``/dev/null``, and strict host key
checking will be turned off.

It is highly recommended that this option is *not* set, unless the user has
verified that the provider supports this functionality, and that the image
being used is capable of providing the necessary information. At this time,
only the EC2 driver supports this functionality.
SSH Agent
=========

.. versionadded:: 2015.5.0

If the ssh key is not stored on the server salt-cloud is being run on, set
``ssh_agent``, and salt-cloud will use the forwarded ssh-agent to authenticate.

.. code-block:: yaml

    ssh_agent: True
File Map Upload
===============

.. versionadded:: 2014.7.0

The ``file_map`` option allows an arbitrary group of files to be uploaded to
the target system before running the deploy script. This functionality requires
that a provider use ``salt.utils.cloud.bootstrap()``, which is currently
limited to the ec2, gce, openstack and nova drivers.

The ``file_map`` can be configured globally in ``/etc/salt/cloud``, or in any
cloud provider or profile file. For example, to upload an extra package or a
custom deploy script, a cloud profile using ``file_map`` might look like:

.. code-block:: yaml

    ubuntu14:
      provider: ec2-config
      image: ami-98aa1cf0
      size: t1.micro
      ssh_username: root
      securitygroup: default
      file_map:
        /local/path/to/custom/script: /remote/path/to/use/custom/script
        /local/path/to/package: /remote/path/to/store/package
Running Pre-Flight Commands
===========================

.. versionadded:: 2018.3.0

To execute specified preflight shell commands on a VM before the deploy script
is run, use the ``preflight_cmds`` option. These must be defined as a list in
a cloud configuration file. For example:

.. code-block:: yaml

    my-cloud-profile:
      provider: linode-config
      image: Ubuntu 16.04 LTS
      size: Linode 2048
      preflight_cmds:
        - whoami
        - echo 'hello world!'

These commands will run in sequence **before** the bootstrap script is
executed.
Force Minion Config
===================

.. versionadded:: 2018.3.0

The ``force_minion_config`` option requests the bootstrap process to overwrite
an existing minion configuration file and public/private key files.

Default: False

This might be important for drivers (such as ``saltify``) which are expected
to take over a connection from a former salt master.

.. code-block:: yaml

    my_saltify_provider:
      driver: saltify
      force_minion_config: true