.. _tutorial-salt-at-scale:

===================
Using Salt at scale
===================

The focus of this tutorial will be building a Salt infrastructure for handling
large numbers of minions. This will include tuning, topology, and best practices.

For how to install the Salt Master, please see
`Installing saltstack <http://docs.saltstack.com/topics/installation/index.html>`_.

.. note::

    This tutorial is intended for large installations. Although these same settings
    will not hurt smaller installations, they may not be worth the added complexity there.

    When used with minions, the term 'many' refers to at least a thousand
    and 'a few' always means 500.

    For simplicity reasons, this tutorial will default to the standard ports
    used by Salt.

The Master
==========

The most common problems on the Salt Master are:

1. too many minions authing at once
2. too many minions re-authing at once
3. too many minions re-connecting at once
4. too many minions returning at once
5. too few resources (CPU/HDD)

The first three are all "thundering herd" problems. To mitigate these issues
we must configure the minions to back off appropriately when the Master is
under heavy load.

The fourth is caused by masters with insufficient hardware resources in combination
with a possible bug in ZeroMQ. At least that is what it looks like as of today
(`Issue 11865 <https://github.com/saltstack/salt/issues/11865>`_,
`Issue 5948 <https://github.com/saltstack/salt/issues/5948>`_,
`Mail thread <https://groups.google.com/forum/#!searchin/salt-users/lots$20of$20minions/salt-users/WxothArv2Do/t12MigMQDFAJ>`_).

To fully understand each problem, it is important to understand how Salt works.

Very briefly, the Salt Master offers two services to the minions:

- a job publisher on port 4505
- an open port 4506 to receive the minions' returns

All minions are always connected to the publisher on port 4505 and only connect
to the open return port 4506 if necessary. On an idle Master, there will only
be connections on port 4505.
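
For reference, these two ports correspond to the master's ``publish_port`` and
``ret_port`` settings. A minimal sketch of those options, shown at their defaults:

.. code-block:: yaml

    # /etc/salt/master -- defaults shown
    publish_port: 4505
    ret_port: 4506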

Too many minions authing
------------------------

When the Minion service is first started up, it will connect to its Master's publisher
on port 4505. If too many minions are started at once, this can cause a "thundering herd".
This can be avoided by not starting too many minions at once.

The connection itself usually isn't the culprit; the more likely cause of master-side
issues is the authentication that the Minion must do with the Master. If the Master
is too heavily loaded to handle the auth request, it will time it out. The Minion
will then wait `acceptance_wait_time` to retry. If `acceptance_wait_time_max` is
set then the Minion will increase its wait time by `acceptance_wait_time` on each
subsequent retry until reaching `acceptance_wait_time_max`.
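
As an illustration, a minion configured as below would wait 10 seconds after a timed-out
auth request, then 20, 30, and so on, capping at 60 seconds. The values are examples,
not recommendations; pick ones that match your environment:

.. code-block:: yaml

    # /etc/salt/minion -- example back-off values
    acceptance_wait_time: 10
    acceptance_wait_time_max: 60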

Too many minions re-authing
---------------------------

This is most likely to happen in the testing phase of a Salt deployment, when
all Minion keys have already been accepted, but the framework is being tested
and parameters are frequently changed in the Salt Master's configuration
file(s).

The Salt Master generates a new AES key to encrypt its publications at certain
events such as a Master restart or the removal of a Minion key. If you are
encountering this problem of too many minions re-authing against the Master,
you will need to recalibrate your setup to reduce the rate of events like a
Master restart or Minion key removal (``salt-key -d``).

When the Master generates a new AES key, the minions aren't notified of this
but will discover it on the next pub job they receive. When the Minion
receives such a job it will then re-auth with the Master. Since Salt does
minion-side filtering this means that all the minions will re-auth on the next
command published on the master, causing another "thundering herd". This can
be avoided by setting

.. code-block:: yaml

    random_reauth_delay: 60

in the minion's configuration file to a higher value in order to stagger the
re-auth attempts. Increasing this value will of course increase the time
it takes until all minions are reachable via Salt commands.

Too many minions re-connecting
------------------------------

By default the zmq socket will re-connect every 100ms, which for some larger
installations may be too quick. This setting controls how quickly the TCP session is
re-established, but has no bearing on the auth load.

To tune the minion's socket reconnect attempts, there are a few values in
the sample configuration file (default values shown):

.. code-block:: yaml

    recon_default: 1000
    recon_max: 5000
    recon_randomize: True

- recon_default: the default value the socket should use, i.e. 1000. This value is in
  milliseconds. (1000ms = 1 second)
- recon_max: the max value that the socket should use as a delay before trying to reconnect.
  This value is in milliseconds. (5000ms = 5 seconds)
- recon_randomize: enables randomization between recon_default and recon_max

To tune these values for an existing environment, a few decisions have to be made:

1. How long can one wait before the minions should be online and reachable via Salt?
2. How many reconnects can the Master handle without a SYN flood?

These questions cannot be answered generally. Their answers depend on the
hardware and the administrator's requirements.

Here is an example scenario with the goal of having all minions reconnect
within a 60-second time frame on a Salt Master service restart.

.. code-block:: yaml

    recon_default: 1000
    recon_max: 59000
    recon_randomize: True

Each Minion will have a randomized reconnect value between 'recon_default'
and 'recon_default + recon_max', which in this example means between 1000ms
and 60000ms (or between 1 and 60 seconds). The generated random value will
be doubled after each attempt to reconnect (ZeroMQ default behavior).

Let's say the generated random value is 11 seconds (or 11000ms).

.. code-block:: bash

    reconnect 1: wait 11 seconds
    reconnect 2: wait 22 seconds
    reconnect 3: wait 33 seconds
    reconnect 4: wait 44 seconds
    reconnect 5: wait 55 seconds
    reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
    reconnect 7: wait 11 seconds
    reconnect 8: wait 22 seconds
    reconnect 9: wait 33 seconds
    reconnect x: etc.

With a thousand minions this will mean

.. code-block:: text

    1000/60 = ~16

roughly 16 connection attempts per second. These values should be altered to
values that match your environment. Keep in mind, though, that the environment may
grow over time and that more minions might raise the problem again.

Too many minions returning at once
----------------------------------

This can also happen during the testing phase, if all minions are addressed at
once with

.. code-block:: bash

    $ salt '*' disk.usage

it may cause thousands of minions to try to return their data to the Salt Master's
open port 4506 at once, resulting in a SYN flood if the Master can't handle that
many returns at the same time.

This can be easily avoided with Salt's batch mode:

.. code-block:: bash

    $ salt '*' disk.usage -b 50

This will only address 50 minions at once while looping through all addressed
minions.

Too few resources
=================

The Master's resources always have to match the environment. There is no way
to give good advice without knowing the environment the Master is supposed to
run in. But here are some general tuning tips for different situations:

The Master is CPU bound
-----------------------

Salt uses RSA key pairs on the master's and minion's end. Both generate 4096-bit
key pairs on first start. While the key size for the Master is currently
not configurable, the minion's key size can be changed. For example, with a
2048-bit key:

.. code-block:: yaml

    keysize: 2048

With thousands of decryptions, the amount of time that can be saved on the
master's end should not be neglected. See `Pull Request 9235
<https://github.com/saltstack/salt/pull/9235>`_ for a reference of how much
influence the key size can have.

Downsizing the Salt Master's key is not that important, because the minions
do not encrypt as many messages as the Master does.

In installations with large or complex pillar files, it is possible
for the master to exhibit poor performance as a result of having to render
many pillar files at once. This can manifest itself in a number of ways, both
as high load on the master and as long waits on minions which block until their
pillar is delivered to them.

To reduce pillar rendering times, it is possible to cache pillars on the
master. To do this, see the set of master configuration options which
are prefixed with `pillar_cache`.
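
As a sketch of what enabling the pillar cache can look like in the master
configuration file (the TTL and backend values are illustrative, not recommendations):

.. code-block:: yaml

    # /etc/salt/master -- example pillar cache settings
    pillar_cache: True
    pillar_cache_ttl: 3600        # seconds before a cached pillar is re-rendered
    pillar_cache_backend: disk    # keep rendered pillars on disk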

If many pillars are encrypted using the :mod:`gpg <salt.renderers.gpg>` renderer, it
is possible to cache GPG data. To do this, see the set of master configuration
options which are prefixed with `gpg_cache`.
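
Again as an illustrative sketch, with example values:

.. code-block:: yaml

    # /etc/salt/master -- example GPG cache settings
    gpg_cache: True
    gpg_cache_ttl: 86400          # seconds before cached GPG data expires
    gpg_cache_backend: disk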

.. note::

    Caching pillars or GPG data on the master may introduce security
    considerations. Be certain to read the caveats outlined in the master
    configuration file to understand how pillar caching may affect a master's
    ability to protect sensitive data!

The Master is disk IO bound
---------------------------

By default, the Master saves every Minion's return for every job in its
job-cache. The cache can then be used later to look up results for previous
jobs. The default directory for this is:

.. code-block:: yaml

    cachedir: /var/cache/salt

and then in the ``proc`` directory.

Each job return for every Minion is saved in a single file. Over time this
directory can grow quite large, depending on the number of published jobs. The
number of files and directories will scale with the number of jobs published and
the retention time defined by

.. code-block:: yaml

    keep_jobs: 24

.. code-block:: text

    250 jobs/day * 2000 minions returns = 500,000 files a day

Use an External Job Cache
~~~~~~~~~~~~~~~~~~~~~~~~~

An external job cache allows for job storage to be placed on an external
system, such as a database.

- ext_job_cache: this will have the minions store their return data directly
  into a returner (not sent through the Master)
- master_job_cache (New in `2014.7.0`): this will make the Master store the job
  data using a returner (instead of the local job cache on disk).
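
As a sketch, pointing the Master's job cache at a returner could look like the
following. The ``mysql`` returner is only an example; any configured returner
works, and the connection values shown are placeholders:

.. code-block:: yaml

    # /etc/salt/master -- store job data via a returner instead of on local disk
    master_job_cache: mysql
    mysql.host: 'db.example.com'
    mysql.user: 'salt'
    mysql.pass: 'salt'
    mysql.db: 'salt'
    mysql.port: 3306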

If a master has many accepted keys, it may take a long time to publish a job
because the master must first determine the matching minions and deliver
that information back to the waiting client before the job can be published.

To mitigate this, a key cache may be enabled. This will reduce the load
on the master to a single file open instead of thousands or tens of thousands.

This cache is updated by the maintenance process, however, which means that
minions with keys that are accepted may not be targeted by the master
for up to sixty seconds by default.

To enable the master key cache, set `key_cache: 'sched'` in the master
configuration file.
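
In the master configuration file this looks like:

.. code-block:: yaml

    key_cache: 'sched'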

Disable The Job Cache
~~~~~~~~~~~~~~~~~~~~~

The job cache is a central component of the Salt Master and many aspects of
the Salt Master will not function correctly without a running job cache.

Disabling the job cache is **STRONGLY DISCOURAGED** and should not be done
unless the master is being used to execute routines that require no history
or reliable feedback!

The job cache can be disabled:

.. code-block:: yaml

    job_cache: False