.. _syndic:

===========
Salt Syndic
===========

The most basic or typical Salt topology consists of a single Master node
controlling a group of Minion nodes. An intermediate node type, called Syndic,
offers greater structural flexibility and scalability in the construction of
Salt topologies than topologies constructed only out of Master and Minion node
types.

A Syndic node can be thought of as a special passthrough Minion node. A Syndic
node consists of a ``salt-syndic`` daemon and a ``salt-master`` daemon running
on the same system. The ``salt-master`` daemon running on the Syndic node
controls a group of lower level Minion nodes and the ``salt-syndic`` daemon
connects to a higher level Master node, sometimes called a Master of Masters.

The ``salt-syndic`` daemon relays publications and events between the Master
node and the local ``salt-master`` daemon. This gives the Master node control
over the Minion nodes attached to the ``salt-master`` daemon running on the
Syndic node.
Configuring the Syndic
======================

To set up a Salt Syndic you need to tell the Syndic node and its Master node
about each other. If your Master node is located at ``10.10.0.1``, then your
configurations would be:

On the Syndic node:

.. code-block:: yaml

    # /etc/salt/master

    syndic_master: 10.10.0.1  # may be either an IP address or a hostname

.. code-block:: yaml

    # /etc/salt/minion

    # id is shared by the salt-syndic daemon and a possible salt-minion daemon
    # on the Syndic node
    id: my_syndic

On the Master node:

.. code-block:: yaml

    # /etc/salt/master

    order_masters: True

The :conf_master:`syndic_master` option tells the Syndic node where to find the
Master node in the same way that the :conf_minion:`master` option tells a
Minion node where to find a Master node.

The :conf_minion:`id` option is used by the ``salt-syndic`` daemon to identify
with the Master node and if unset will default to the hostname or IP address of
the Syndic just as with a Minion.

The :conf_master:`order_masters` option configures the Master node to send
extra information with its publications that is needed by Syndic nodes
connected directly to it.

.. note::

    Each Syndic must provide its own ``file_roots`` directory. Files will not
    be automatically transferred from the Master node.
Configuring the Syndic with Multimaster
=======================================

.. versionadded:: 2015.5.0

Syndic with Multimaster lets you connect a syndic to multiple masters to provide
an additional layer of redundancy in a syndic configuration.

Higher level masters should first be configured in a multimaster configuration.
See :ref:`Multimaster Tutorial <tutorial-multi-master>`.

On the syndic, the :conf_master:`syndic_master` option is populated with
a list of the higher level masters.

Since each syndic is connected to each master, jobs sent from any master are
forwarded to minions that are connected to each syndic. If the ``master_id`` value
is set in the master config on the higher level masters, job results are returned
to the master that originated the request in a best effort fashion. Events/jobs
without a ``master_id`` are returned to any available master.
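On the syndic node, such a configuration might look like the following sketch,
where the master addresses ``10.10.0.1`` and ``10.10.0.2`` are illustrative:

.. code-block:: yaml

    # /etc/salt/master on the Syndic node

    syndic_master:
      - 10.10.0.1
      - 10.10.0.2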
Running the Syndic
==================

The ``salt-syndic`` daemon is a separate process that needs to be started in
addition to the ``salt-master`` daemon running on the Syndic node. Starting
the ``salt-syndic`` daemon is the same as starting the other Salt daemons.

The Master node in many ways sees the Syndic as an ordinary Minion node. In
particular, the Master will need to accept the Syndic's Minion key as it would
for any other Minion.

On the Syndic node:

.. code-block:: bash

    # salt-syndic
    or
    # service salt-syndic start

On the Master node:

.. code-block:: bash

    # salt-key -a my_syndic

The Master node will now be able to control the Minion nodes connected to the
Syndic. Only the Syndic key will be listed in the Master node's key registry,
but this also means that key activity between the Syndic's Minions and the
Syndic does not encumber the Master node. In this way, the Syndic's key on the
Master node can be thought of as a placeholder for the keys of all the Minion
and Syndic nodes beneath it, giving the Master node a clear, high level
structural view of the Salt cluster.

On the Master node:

.. code-block:: bash

    # salt-key -L
    Accepted Keys:
    my_syndic
    Denied Keys:
    Unaccepted Keys:
    Rejected Keys:

    # salt '*' test.version
    minion_1:
        2018.3.4
    minion_2:
        2018.3.4
    minion_4:
        2018.3.4
    minion_3:
        2018.3.4
Topology
========

A Master node (a node which is itself not a Syndic to another higher level
Master node) must run a ``salt-master`` daemon and optionally a ``salt-minion``
daemon.

A Syndic node must run ``salt-syndic`` and ``salt-master`` daemons and
optionally a ``salt-minion`` daemon.

A Minion node must run a ``salt-minion`` daemon.

When a ``salt-master`` daemon issues a command, it will be received by the
Syndic and Minion nodes directly connected to it. A Minion node will process
the command in the way it ordinarily would. On a Syndic node, the
``salt-syndic`` daemon will relay the command to the ``salt-master`` daemon
running on the Syndic node, which then propagates the command to the Minions
and Syndics connected to it.

When events and job return data are generated by ``salt-minion`` daemons, they
are aggregated by the ``salt-master`` daemon they are connected to, which then
relays the data back through its ``salt-syndic`` daemon until the data reaches
the Master or Syndic node that issued the command.
Syndic wait
===========

``syndic_wait`` is a master configuration file setting that specifies the number of
seconds the Salt client should wait for additional syndics to check in with their
lists of expected minions before giving up. This value defaults to ``5`` seconds.

The ``syndic_wait`` setting is necessary because the higher-level master does not
have a way of knowing which minions are below the syndics. The higher-level master
has its own list of expected minions, and the masters below it have their own lists
as well, so the Salt client does not know how long to wait for all returns. The
``syndic_wait`` option allows time for all minions to return to the Salt client.

.. note::

    To reduce the amount of time the CLI waits for Minions to respond, install
    a Minion on the Syndic or tune the value of the ``syndic_wait``
    configuration.

While it is possible to run a Syndic without a Minion installed on the same
system, it is recommended, for a faster CLI response time, to install one.
Without a Minion installed on the Syndic node, the timeout value of
``syndic_wait`` increases significantly - about three-fold. With a Minion
installed on the Syndic, the CLI timeout resides at the value defined in
``syndic_wait``.

.. note::

    If you have a very large infrastructure or many layers of Syndics, you may
    find that the CLI doesn't wait long enough for the Syndics to return their
    events. If you think this is the case, you can set the
    :conf_master:`syndic_wait` value in the Master configs on the Master or
    Syndic nodes from which commands are executed. The default value is ``5``,
    and should work for the majority of deployments.

In order for a Master or Syndic node to return information from Minions that
are below their Syndics, the CLI requires a short wait time in order to allow
the Syndics to gather responses from their Minions. This value is defined in
the :conf_master:`syndic_wait` config option and has a default of five seconds.
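For example, a larger deployment might raise the wait time on the node from
which commands are run; the value ``10`` below is purely illustrative and
should be tuned to your topology:

.. code-block:: yaml

    # /etc/salt/master on the node from which commands are executed

    syndic_wait: 10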
Syndic config options
=====================

These are the options that can be used to configure a Syndic node. Note that
other than ``id``, Syndic config options are placed in the Master config on the
Syndic node.

- :conf_minion:`id`: Syndic id (shared by the ``salt-syndic`` daemon with a
  potential ``salt-minion`` daemon on the same system)
- :conf_master:`syndic_master`: Master node IP address or hostname
- :conf_master:`syndic_master_port`: Master node ret_port
- :conf_master:`syndic_log_file`: path to the logfile (absolute or not)
- :conf_master:`syndic_pidfile`: path to the pidfile (absolute or not)
- :conf_master:`syndic_wait`: time in seconds to wait on returns from this syndic
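Putting these options together, the Master config on a Syndic node might look
like the following sketch; the address, paths, and port shown here are
illustrative (``4506`` is the default ret_port):

.. code-block:: yaml

    # /etc/salt/master on the Syndic node

    syndic_master: 10.10.0.1
    syndic_master_port: 4506
    syndic_log_file: /var/log/salt/syndic
    syndic_pidfile: /var/run/salt-syndic.pid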
Minion Data Cache
=================

Beginning with Salt 2016.11.0, the :ref:`Pluggable Minion Data Cache <pluggable-data-cache>`
was introduced. The minion data cache contains the Salt Mine data, minion grains, and minion
pillar information cached on the Salt Master. By default, Salt uses the ``localfs`` cache
module, but other external data stores can be used instead.

Using pluggable minion cache modules allows the data stored on a Salt Master about
Salt Minions to be replicated on other Salt Masters the Minion is connected to. Please see
the :ref:`Minion Data Cache <cache>` documentation for more information and configuration
examples.