.. _salt_architecture:

Overview
========

In its most typical use, Salt is a software application in which clients,
called "minions", can be commanded and controlled from a central command
server called a "master".

Commands are normally issued to the minions (via the master) by calling a
client script simply called 'salt'.

Salt features a pluggable transport system to issue commands from a master to
minions. The default transport is ZeroMQ.
Salt Client
===========

Overview
--------

The Salt client is run on the same machine as the Salt Master and communicates
with the salt-master to issue commands, receive the results, and display them
to the user.

The primary abstraction for the Salt client is called 'LocalClient'.

When LocalClient wants to publish a command to minions, it connects to the
master by issuing a request to the master's ReqServer (TCP: 4506).

The LocalClient system listens for responses to its requests by listening to
the master event bus publisher (master_event_pub.ipc).
Salt Master
===========

Overview
--------

The salt-master daemon runs on the designated Salt master and performs
functions such as authenticating minions, sending and receiving requests
from connected minions, and sending and receiving requests and replies
from the 'salt' CLI.

Moving Pieces
-------------

When a Salt master starts up, a number of processes are started, all of which
are called 'salt-master' in a process list but have various role categories.

Among those categories are:

* Publisher
* EventPublisher
* MWorker
Publisher
---------

The Publisher process is responsible for sending commands over the designated
transport to connected minions. The Publisher is bound to the following:

* TCP: port 4505
* IPC: publish_pull.ipc

Each Salt minion establishes a connection to the master Publisher.
EventPublisher
--------------

The EventPublisher publishes master events out to any event listeners. It is
bound to the following:

* IPC: master_event_pull.ipc
* IPC: master_event_pub.ipc
MWorker
-------

Worker processes manage the back-end operations for the Salt Master.

The number of workers is equivalent to the number of 'worker_threads'
specified in the master configuration and is always at least one.

Workers are bound to the following:

* IPC: workers.ipc
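The fan-out from a shared channel to the MWorker pool can be illustrated with a toy in-process sketch, using threads and a queue in place of the real worker processes and workers.ipc (all names here are illustrative, not Salt internals):

```python
import queue
import threading

# 'worker_threads' mirrors the master config option; the shared queue
# plays the role of the workers.ipc channel.
config = {"worker_threads": 3}
requests = queue.Queue()
results = []
results_lock = threading.Lock()

def mworker(worker_id):
    # Each worker blocks on the shared channel and handles whichever
    # request it happens to pick up; work is distributed, not assigned.
    while True:
        req = requests.get()
        if req is None:  # shutdown sentinel
            break
        with results_lock:
            results.append((worker_id, req))

workers = [
    threading.Thread(target=mworker, args=(i,))
    for i in range(max(1, config["worker_threads"]))
]
for w in workers:
    w.start()

for job in ["auth minion-1", "publish test.version", "return job-123"]:
    requests.put(job)
for _ in workers:
    requests.put(None)
for w in workers:
    w.join()

print(len(results))  # every request was handled by some worker
```

Because all workers read from the same channel, adding workers increases throughput without any routing logic, which is the point of the MWorker design.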
ReqServer
---------

The Salt request server takes requests and distributes them to available
MWorker processes for processing. It also receives replies back from minions.

The ReqServer is bound to the following:

* TCP: 4506
* IPC: workers.ipc

Each Salt minion establishes a connection to the master ReqServer.
Job Flow
--------

The Salt master works by always publishing commands to all connected minions;
the minions decide whether a command is meant for them by checking themselves
against the command's target.
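This broadcast-then-filter model can be sketched with a minimal example of shell-style glob matching, which is Salt's default target type (the helper function name is made up for the example):

```python
from fnmatch import fnmatch

def command_is_for_me(minion_id, target):
    # Default Salt targeting matches the minion ID against a shell-style
    # glob; each minion performs this check locally after the broadcast
    # and silently ignores jobs that do not match.
    return fnmatch(minion_id, target)

print(command_is_for_me("web01.example.com", "web*"))  # True
print(command_is_for_me("db01.example.com", "web*"))   # False
```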
The typical lifecycle of a Salt job from the perspective of the master
might be as follows:

1) A command is issued on the CLI. For example, 'salt my_minion test.version'.

2) The 'salt' command uses LocalClient to generate a request to the Salt
   master by connecting to the ReqServer on TCP: 4506 and issuing the job.

3) The salt-master ReqServer sees the request and passes it to an available
   MWorker over workers.ipc.

4) A worker picks up the request and handles it. First, it checks to ensure
   that the requesting user has permission to issue the command. Then, it
   sends the publish command to all connected minions. For the curious, this
   happens in ClearFuncs.publish().

5) The worker announces on the master event bus that it is about to publish a
   job to connected minions. This happens by placing the event on the master
   event bus (master_event_pull.ipc) where the EventPublisher picks it up and
   distributes it to all connected event listeners on master_event_pub.ipc.

6) The message to the minions is encrypted and sent to the Publisher via IPC
   on publish_pull.ipc.

7) Connected minions have a TCP session established with the Publisher on TCP
   port 4505 where they await commands. When the Publisher receives the job
   over publish_pull, it sends the job across the wire to the minions for
   processing.

8) After the minions receive the request, they decrypt it and perform any
   requested work, if they determine that they are targeted to do so.

9) When the minion is ready to respond, it publishes the result of its job
   back to the master by sending the encrypted result to the master on TCP
   4506, where it is again picked up by the ReqServer and forwarded to an
   available MWorker for processing. (Again, this happens by passing this
   message across workers.ipc to an available worker.)

10) When the MWorker receives the job it decrypts it and fires an event onto
    the master event bus (master_event_pull.ipc). (Again for the curious,
    this happens in AESFuncs._return().)

11) The EventPublisher sees this event and re-publishes it on the bus to all
    connected listeners of the master event bus (on master_event_pub.ipc).
    This is where the LocalClient has been waiting, listening to the event
    bus for minion replies. It gathers the job and stores the result.

12) When all targeted minions have replied or the timeout has been exceeded,
    the salt client displays the results of the job to the user on the CLI.
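The wait-and-collect behaviour of steps 11 and 12 can be sketched with an in-process queue standing in for master_event_pub.ipc (the function name and event payloads are illustrative only):

```python
import queue
import time

def gather_returns(bus, expected_minions, timeout=5.0):
    # Toy version of the client's wait loop: read minion return events
    # off the event bus until every targeted minion has answered or the
    # timeout expires, whichever comes first.
    returns = {}
    deadline = time.monotonic() + timeout
    while set(returns) != set(expected_minions):
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        try:
            minion_id, result = bus.get(timeout=remaining)
        except queue.Empty:
            break
        returns[minion_id] = result
    return returns

bus = queue.Queue()
bus.put(("web01", {"retcode": 0}))
bus.put(("web02", {"retcode": 0}))
print(gather_returns(bus, ["web01", "web02"], timeout=1.0))
```

Note that the loop exits early once all expected minions have replied, so a fast job does not have to wait out the full timeout.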
Salt Minion
===========

Overview
--------

The salt-minion is a single process that sits on machines to be managed by
Salt. It can either operate as a stand-alone daemon which accepts commands
locally via 'salt-call', or it can connect back to a master and receive
commands remotely.

When starting up, Salt minions connect *back* to a master defined in the
minion config file. They connect to two ports on the master:

* TCP: 4505

  This is the connection to the master Publisher. It is on this port that
  the minion receives jobs from the master.

* TCP: 4506

  This is the connection to the master ReqServer. It is on this port that
  the minion sends job results back to the master.
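A minimal minion configuration pointing the minion back at its master might look like the following (the master address is a placeholder):

```yaml
# /etc/salt/minion (excerpt)
master: salt.example.com   # the minion dials out; the master never connects in
id: web01                  # optional; defaults to the machine's FQDN
```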
Event System
------------

Similar to the master, a salt-minion has its own event system that operates
over IPC by default. The minion event system operates on a push/pull system
with IPC files at minion_event_<unique_id>_pub.ipc and
minion_event_<unique_id>_pull.ipc.

The astute reader might ask why have an event bus at all with a single-process
daemon. The answer is that the salt-minion may fork other processes as
required to do the work without blocking the main salt-minion process, and
this necessitates a mechanism by which those processes can communicate with
each other. Secondarily, this provides a bus to which any user with
sufficient permissions can read or write, as a common interface with the Salt
minion.
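The push/pull-to-publish relay can be sketched with threads and queues standing in for the pull and pub IPC files (a toy model, not Salt's implementation; the event shape is made up):

```python
import queue
import threading

# One channel that workers push events into (the *_pull.ipc side) and
# per-listener channels that events fan out to (the *_pub.ipc side).
pull = queue.Queue()
subscribers = [queue.Queue(), queue.Queue()]

def relay():
    # The relay drains the pull side and copies each event to every
    # subscriber, so all listeners observe the same stream.
    while True:
        event = pull.get()
        if event is None:  # shutdown sentinel
            break
        for sub in subscribers:
            sub.put(event)

t = threading.Thread(target=relay)
t.start()

# A forked job process would push its progress onto the pull side...
pull.put({"tag": "job/1234/ret", "data": {"success": True}})
pull.put(None)
t.join()

# ...and every listener on the pub side sees the same event.
first = subscribers[0].get()
second = subscribers[1].get()
print(first == second)
```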
Minion Job Flow
---------------

When a Salt minion starts up, it attempts to connect to the Publisher and the
ReqServer on the Salt master. It then attempts to authenticate and, once the
minion has successfully authenticated, it simply listens for jobs.

Jobs normally come either from the 'salt-call' script run by a local user on
the Salt minion, or directly from a master.

The job flow on a minion, coming from the master via a 'salt' command, is as
follows:

1) A master publishes a job that is received by a minion as outlined by the
   master's job flow above.

2) The minion is polling its receive socket that's connected to the master
   Publisher (TCP 4505 on master). When it detects an incoming message, it
   picks it up from the socket and decrypts it.

3) A new minion process or thread is created, and the _thread_return()
   method is provided with the contents of the decrypted message.

4) The new minion thread starts up, and _thread_return() calls out to the
   requested function contained in the job.

5) The requested function runs and returns a result. [Still in thread.]

6) The result of the function that's run is encrypted and returned to the
   master's ReqServer (TCP 4506 on master). [Still in thread.]

7) The thread exits. Because the main thread was only blocked for the time
   that it took to initialize the worker thread, many other requests could
   have been received and processed during this time.
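The thread-per-job pattern in steps 3 through 7 can be sketched as follows; the dispatch table, job shape, and the results list (standing in for the encrypted return to the master) are all made up for the example:

```python
import threading

results = []
results_lock = threading.Lock()

def _thread_return_sketch(job):
    # Stand-in for the minion's _thread_return(): look up the requested
    # function, run it, and hand the result back (to the master's
    # ReqServer in real life; to a shared list here).
    funcs = {"test.ping": lambda: True, "test.echo": lambda s: s}
    ret = funcs[job["fun"]](*job.get("arg", []))
    with results_lock:
        results.append((job["jid"], ret))

threads = []
for job in [{"jid": "1", "fun": "test.ping"},
            {"jid": "2", "fun": "test.echo", "arg": ["hello"]}]:
    # The main loop only blocks long enough to start each thread, so it
    # can keep receiving publishes while earlier jobs are still running.
    t = threading.Thread(target=_thread_return_sketch, args=(job,))
    t.start()
    threads.append(t)
for t in threads:
    t.join()

print(sorted(results))
```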
A Note on ClearFuncs vs. AESFuncs
=================================

A common source of confusion is determining when messages are passed in the
clear and when they are passed using encryption. There are two rules governing
this behaviour:

1) ClearFuncs is used for intra-master communication and during the initial
   authentication handshake between a minion and master during the key
   exchange.

2) AESFuncs is used everywhere else.