.. _tutorial-multi-master:

=====================
Multi Master Tutorial
=====================

As of Salt 0.16.0, the ability to connect minions to multiple masters has been
made available. The multi-master system allows for redundancy of Salt masters
and facilitates multiple points of communication out to minions. When using a
multi-master setup, all masters are running hot, and any active master can be
used to send commands out to the minions.

.. note::

    If you need failover capabilities with multiple masters, there is also a
    MultiMaster-PKI setup available that uses a different topology:
    `MultiMaster-PKI with Failover Tutorial <http://docs.saltstack.com/en/latest/topics/tutorials/multimaster_pki.html>`_

In 0.16.0, the masters do not share any information. Keys need to be accepted
on each master, and shared files need to be distributed manually or managed
with tools like the git fileserver backend to ensure that the
:conf_master:`file_roots` are kept consistent.

Beginning with Salt 2016.11.0, the :ref:`Pluggable Minion Data Cache
<pluggable-data-cache>` was introduced. The minion data cache contains the
Salt Mine data, minion grains, and minion pillar information cached on the
Salt Master. By default, Salt uses the ``localfs`` cache module, but other
external data stores can be used instead.

Using a pluggable minion cache module allows the data stored on a Salt Master
about its Salt Minions to be replicated on the other Salt Masters the Minion
is connected to. Please see the :ref:`Minion Data Cache <cache>` documentation
for more information and configuration examples.

Summary of Steps
----------------

1. Create a redundant master server
2. Copy primary master key to redundant master
3. Start redundant master
4. Configure minions to connect to redundant master
5. Restart minions
6. Accept keys on redundant master

Prepping a Redundant Master
---------------------------

The first task is to prepare the redundant master. If the redundant master is
already running, stop it. There is only one requirement when preparing a
redundant master, which is that masters share the same private key. When the
first master was created, the master's identifying key pair was generated and
placed in the master's ``pki_dir``. The default location of the master's key
pair is ``/etc/salt/pki/master/``. Take the private key, ``master.pem``, and
copy it to the same location on the redundant master. Do the same for the
master's public key, ``master.pub``. Assuming that no minions have yet been
connected to the new redundant master, it is safe to delete any existing key
in this location and replace it.
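
Assuming passwordless SSH access to the redundant master, the copy could be
done with ``scp`` (the hostname below is illustrative):

.. code-block:: bash

    # Run on the original master. Make sure salt-master is stopped on the
    # redundant master before replacing its keys.
    scp /etc/salt/pki/master/master.pem saltmaster2.example.com:/etc/salt/pki/master/
    scp /etc/salt/pki/master/master.pub saltmaster2.example.com:/etc/salt/pki/master/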

.. note::

    There is no logical limit to the number of redundant masters that can be
    used.

Once the new key is in place, the redundant master can be safely started.

Configure Minions
-----------------

Since minions need to be master-aware, the new master needs to be added to the
minion configurations. Simply update the minion configurations to list all
connected masters:

.. code-block:: yaml

    master:
      - saltmaster1.example.com
      - saltmaster2.example.com

Now the minion can be safely restarted.
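
On systemd-based distributions, for example:

.. code-block:: bash

    systemctl restart salt-minion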

.. note::

    If the ``ipc_mode`` for the minion is set to TCP (the default on Windows),
    then each minion in the multi-minion setup (one per master) needs its own
    ``tcp_pub_port`` and ``tcp_pull_port``.

    If these settings are left at the defaults of 4510/4511, each minion
    object will receive a port 2 higher than the previous. Thus the first
    minion will get 4510/4511, the second will get 4512/4513, and so on. If
    these port assignments are unacceptable, you must configure
    ``tcp_pub_port`` and ``tcp_pull_port`` with lists of ports for each
    master. The length of these lists should match the number of masters, and
    there should be no overlap between the lists.
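
Such a configuration, with one publish/pull port pair per master, might look
like the following minion config (port numbers and hostnames are
illustrative):

.. code-block:: yaml

    ipc_mode: tcp

    master:
      - saltmaster1.example.com
      - saltmaster2.example.com

    tcp_pub_port:
      - 4510
      - 4512
    tcp_pull_port:
      - 4511
      - 4513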

Now the minions will check into the original master and also check into the
new redundant master. Both masters are first-class and have rights to the
minions.

.. note::

    Minions can automatically detect failed masters and attempt to reconnect
    to them quickly. To enable this functionality, set
    ``master_alive_interval`` in the minion config to the number of seconds at
    which to poll the masters for connection status.

    If this option is not set, minions will still reconnect to failed masters,
    but the first command sent after a master comes back up may be lost while
    the minion authenticates.
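
For example, to poll the masters for connection status every 30 seconds (the
interval is chosen purely for illustration):

.. code-block:: yaml

    master_alive_interval: 30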

Sharing Files Between Masters
-----------------------------

Salt does not automatically share files between multiple masters. A number of
files should be shared, or sharing of these files should be strongly
considered.

Minion Keys
```````````

Minion keys can be accepted the normal way using :strong:`salt-key` on both
masters. Keys accepted, deleted, or rejected on one master will NOT be
automatically managed on redundant masters; this needs to be taken care of by
running salt-key on both masters or by sharing the
``/etc/salt/pki/master/{minions,minions_pre,minions_rejected}`` directories
between masters.
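
For example, when a new minion comes online, its key must be accepted on every
master individually (the minion ID below is illustrative):

.. code-block:: bash

    # Run on each master:
    salt-key --list=pre       # show keys awaiting acceptance
    salt-key --accept=minion1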

.. note::

    While sharing the :strong:`/etc/salt/pki/master` directory will work, it
    is strongly discouraged, since allowing access to the :strong:`master.pem`
    key outside of Salt creates a *SERIOUS* security risk.

File_Roots
``````````

The :conf_master:`file_roots` contents should be kept consistent between
masters. Otherwise state runs will not always be consistent on minions, since
instructions managed by one master will not agree with those of other masters.

The recommended way to sync these is to use a fileserver backend like gitfs or
to keep these files on shared storage.
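
A minimal gitfs setup, applied identically in each master's configuration
(the repository URL is a placeholder), could look like:

.. code-block:: yaml

    fileserver_backend:
      - gitfs

    gitfs_remotes:
      - https://github.com/example/salt-states.git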

.. important::

    If using gitfs/git_pillar with the cachedir shared between masters using
    `GlusterFS`_, NFS, or another network filesystem, and the masters are
    running Salt 2015.5.9 or later, it is strongly recommended not to turn off
    :conf_master:`gitfs_global_lock`/:conf_master:`git_pillar_global_lock`, as
    doing so will cause lock files to be removed if they were created by a
    different master.

.. _GlusterFS: http://www.gluster.org/

Pillar_Roots
````````````

Pillar roots should be given the same considerations as
:conf_master:`file_roots`.

Master Configurations
`````````````````````

While reasons may exist to maintain separate master configurations, it is wise
to remember that each master maintains independent control over minions.
Therefore, access controls should be in sync between masters unless a valid
reason otherwise exists to keep them inconsistent.

These access control options include but are not limited to:

- external_auth
- publisher_acl
- peer
- peer_run
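
For example, a ``publisher_acl`` entry such as the following should appear
verbatim in every master's configuration (the user name and match patterns are
illustrative):

.. code-block:: yaml

    publisher_acl:
      deploy:
        - 'web*':
          - test.ping
          - state.apply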