.. _network-automation:

==================
Network Automation
==================

Network automation is a continuous process of automating the configuration,
management, and operations of a computer network. Although the abstraction
could be compared with the operations on the server side, there are many
particular challenges, the most important being that a network device is
traditionally closed hardware able to run proprietary software only. In other
words, the user is not able to install the salt-minion package directly on a
traditional network device. For these reasons, most network devices can be
controlled only remotely via :ref:`proxy minions <proxy-minion>` or
using :ref:`Salt SSH <salt-ssh>`. However, there are also vendors producing
whitebox equipment (e.g. Arista, Cumulus) or others that have moved the
operating system into a container (e.g. Cisco NX-OS, Cisco IOS-XR),
allowing the salt-minion to be installed directly on the platform.
New in Carbon (2016.11)
-----------------------

The methodologies for network automation were introduced in
:ref:`2016.11.0 <release-2016-11-0-network-automation-napalm>`. Network
automation support is based on proxy minions.

- :mod:`NAPALM proxy <salt.proxy.napalm>`
- :mod:`Junos proxy <salt.proxy.junos>`
- :mod:`Cisco NXOS <salt.proxy.nxos>`
- :mod:`Cisco NOS <salt.proxy.cisconso>`
NAPALM
------

NAPALM (Network Automation and Programmability Abstraction Layer with
Multivendor support) is an open-source Python library that implements a set of
functions to interact with different router vendor devices using a unified API.
Being vendor-agnostic simplifies operations, as the configuration and
interaction with the network device does not rely on a particular vendor.

.. image:: /_static/napalm_logo.png

Beginning with 2017.7.0, the NAPALM modules have been transformed so they can
run in both proxy and regular minions. That means, if the operating system
allows, the salt-minion package can be installed directly on the network gear.
The interface between the network operating system and Salt in that case would
be the corresponding NAPALM sub-package.

For example, if the user installs the salt-minion on an Arista switch, the
only requirement is
`napalm-eos <https://github.com/napalm-automation/napalm-eos>`_.
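Installing such a sub-package is typically a single ``pip`` command. A sketch
(note that later NAPALM releases merged the per-vendor drivers back into the
main ``napalm`` package, so the exact package name depends on the NAPALM
version in use):

.. code-block:: shell

    pip install napalm-eos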
The following modules are available in 2017.7.0:

- :mod:`NAPALM grains <salt.grains.napalm>`
- :mod:`NET execution module <salt.modules.napalm_network>` - Networking basic
  features
- :mod:`NTP execution module <salt.modules.napalm_ntp>`
- :mod:`BGP execution module <salt.modules.napalm_bgp>`
- :mod:`Routes execution module <salt.modules.napalm_route>`
- :mod:`SNMP execution module <salt.modules.napalm_snmp>`
- :mod:`Users execution module <salt.modules.napalm_users>`
- :mod:`Probes execution module <salt.modules.napalm_probes>`
- :mod:`NTP peers management state <salt.states.netntp>`
- :mod:`SNMP configuration management state <salt.states.netsnmp>`
- :mod:`Users management state <salt.states.netusers>`
- :mod:`Netconfig state module <salt.states.netconfig>` - Manage the
  configuration of network devices using arbitrary templates and the
  Salt-specific advanced templating methodologies.
- :mod:`Network ACL execution module <salt.modules.napalm_acl>` - Generate and
  load ACL (firewall) configuration on network devices.
- :mod:`Network ACL state <salt.states.netacl>` - Manage the firewall
  configuration. It only requires writing the pillar structure correctly!
- :mod:`NAPALM YANG execution module <salt.modules.napalm_yang_mod>` - Parse,
  generate and load native device configuration in a standard way,
  using the OpenConfig/IETF models. This module also contains helpers for
  the states.
- :mod:`NAPALM YANG state module <salt.states.netyang>` - Manage the
  network device configuration according to the YANG models (OpenConfig or
  IETF).
- :mod:`NET finder <salt.runners.net>` - Runner to find details easily and
  fast. It's smart enough to know what you are looking for. It will search
  in the details of the network interfaces, IP addresses, MAC address tables,
  ARP tables and LLDP neighbors.
- :mod:`BGP finder <salt.runners.bgp>` - Runner to search BGP neighbor details.
- :mod:`NAPALM syslog <salt.engines.napalm_syslog>` - Engine to import events
  from the napalm-logs library into the Salt event bus. The events are based
  on the syslog messages from the network devices and structured following
  the OpenConfig/IETF YANG models.
- :mod:`NAPALM Helpers <salt.modules.napalm>` - Generic helpers for
  NAPALM-related operations. For example, the
  :mod:`Compliance report <salt.modules.napalm.compliance_report>` function
  can be used inside the state modules to compare the expected and the
  existing configuration.
Getting started
###############

Install NAPALM - follow the notes_ and check the platform-specific dependencies_.

.. _notes: http://napalm.readthedocs.io/en/latest/installation/index.html
.. _dependencies: http://napalm.readthedocs.io/en/latest/installation/index.html#dependencies
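On most Linux systems, for instance, the installation reduces to a single
command (a sketch; see the notes linked above for the platform-specific
system dependencies that must be present first):

.. code-block:: shell

    pip install napalm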
Salt's Pillar system is ideally suited for configuring proxy minions
(though they can be configured in ``/etc/salt/proxy`` as well). Proxies
can either be designated via a pillar file in :conf_master:`pillar_roots`,
or through an external pillar.

External pillars afford the opportunity for interfacing with
a configuration management system, database, or other knowledgeable system
that may already contain all the details of proxy targets. To use static files
in :conf_master:`pillar_roots`, pattern your files after the following examples:
``/etc/salt/pillar/top.sls``

.. code-block:: yaml

    base:
      router1:
        - router1
      router2:
        - router2
      switch1:
        - switch1
      switch2:
        - switch2
      cpe1:
        - cpe1

``/etc/salt/pillar/router1.sls``

.. code-block:: yaml

    proxy:
      proxytype: napalm
      driver: junos
      host: r1.bbone.as1234.net
      username: my_username
      password: my_password

``/etc/salt/pillar/router2.sls``

.. code-block:: yaml

    proxy:
      proxytype: napalm
      driver: iosxr
      host: r2.bbone.as1234.net
      username: my_username
      password: my_password
      optional_args:
        port: 22022

``/etc/salt/pillar/switch1.sls``

.. code-block:: yaml

    proxy:
      proxytype: napalm
      driver: eos
      host: sw1.bbone.as1234.net
      username: my_username
      password: my_password
      optional_args:
        enable_password: my_secret

``/etc/salt/pillar/switch2.sls``

.. code-block:: yaml

    proxy:
      proxytype: napalm
      driver: nxos
      host: sw2.bbone.as1234.net
      username: my_username
      password: my_password

``/etc/salt/pillar/cpe1.sls``

.. code-block:: yaml

    proxy:
      proxytype: napalm
      driver: ios
      host: cpe1.edge.as1234.net
      username: ''
      password: ''
      optional_args:
        use_keys: True
        auto_rollback_on_error: True
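With the pillar files in place, start one ``salt-proxy`` process per device;
the ``--proxyid`` value must match the minion ID used in ``top.sls``. A
minimal sketch, running one proxy in the foreground with debug logging:

.. code-block:: shell

    salt-proxy --proxyid router1 -l debug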
CLI examples
############

Display the complete running configuration on ``router1``:

.. code-block:: bash

    $ sudo salt 'router1' net.config source='running'

Retrieve the NTP servers configured on all devices:

.. code-block:: bash

    $ sudo salt '*' ntp.servers
    router1:
        ----------
        comment:
        out:
            - 1.2.3.4
        result:
            True
    cpe1:
        ----------
        comment:
        out:
            - 1.2.3.4
        result:
            True
    switch2:
        ----------
        comment:
        out:
            - 1.2.3.4
        result:
            True
    router2:
        ----------
        comment:
        out:
            - 1.2.3.4
        result:
            True
    switch1:
        ----------
        comment:
        out:
            - 1.2.3.4
        result:
            True

Display the ARP tables on all Cisco devices running IOS-XR 5.3.3:

.. code-block:: bash

    $ sudo salt -G 'os:iosxr and version:5.3.3' net.arp

Return operational details for interfaces from Arista switches:

.. code-block:: bash

    $ sudo salt -C 'sw* and os:eos' net.interfaces

Execute traceroute from the edge of the network:

.. code-block:: bash

    $ sudo salt 'router*' net.traceroute 8.8.8.8 vrf='CUSTOMER1-VRF'

Verbatim display from the CLI of Juniper routers:

.. code-block:: bash

    $ sudo salt -C 'router* and G@os:junos' net.cli 'show version and haiku'

Retrieve the results of the RPM probes configured on Juniper MX960 routers:

.. code-block:: bash

    $ sudo salt -C 'router* and G@os:junos and G@model:MX960' probes.results

Return the list of configured users on the CPEs:

.. code-block:: bash

    $ sudo salt 'cpe*' users.config

Using the :mod:`BGP finder <salt.runners.bgp>`, return the list of BGP neighbors
that are down:

.. code-block:: bash

    $ sudo salt-run bgp.neighbors up=False

Using the :mod:`NET finder <salt.runners.net>`, determine the devices containing
the pattern "PX-1234-LHR" in their interface description:

.. code-block:: bash

    $ sudo salt-run net.find PX-1234-LHR
Cross-platform configuration management example: NTP
####################################################

Assuming that the user adds the following lines under
:conf_master:`file_roots`:

.. code-block:: yaml

    file_roots:
      base:
        - /etc/salt/pillar/
        - /etc/salt/templates/
        - /etc/salt/states/
Define the list of NTP peers and servers wanted:

``/etc/salt/pillar/ntp.sls``

.. code-block:: yaml

    ntp.servers:
      - 1.2.3.4
      - 5.6.7.8
    ntp.peers:
      - 10.11.12.13
      - 14.15.16.17

Include the new file: for example, if we want to have the same NTP servers on
all network devices, we can add the following lines inside the ``top.sls``
file:

.. code-block:: yaml

    '*':
      - ntp

``/etc/salt/pillar/top.sls``

.. code-block:: yaml

    base:
      '*':
        - ntp
      router1:
        - router1
      router2:
        - router2
      switch1:
        - switch1
      switch2:
        - switch2
      cpe1:
        - cpe1

Or include only where needed:

``/etc/salt/pillar/top.sls``

.. code-block:: yaml

    base:
      router1:
        - router1
        - ntp
      router2:
        - router2
        - ntp
      switch1:
        - switch1
      switch2:
        - switch2
      cpe1:
        - cpe1
Define the cross-vendor template:

``/etc/salt/templates/ntp.jinja``

.. code-block:: jinja

    {%- if grains.vendor|lower == 'cisco' %}
    no ntp
    {%- for server in servers %}
    ntp server {{ server }}
    {%- endfor %}
    {%- for peer in peers %}
    ntp peer {{ peer }}
    {%- endfor %}
    {%- elif grains.os|lower == 'junos' %}
    system {
      replace:
      ntp {
        {%- for server in servers %}
        server {{ server }};
        {%- endfor %}
        {%- for peer in peers %}
        peer {{ peer }};
        {%- endfor %}
      }
    }
    {%- endif %}
Define the SLS state file, making use of the
:mod:`Netconfig state module <salt.states.netconfig>`:

``/etc/salt/states/router/ntp.sls``

.. code-block:: yaml

    ntp_config_example:
      netconfig.managed:
        - template_name: salt://ntp.jinja
        - peers: {{ pillar.get('ntp.peers', []) | json }}
        - servers: {{ pillar.get('ntp.servers', []) | json }}

Run the state and ensure NTP configuration consistency across your
multi-vendor network:

.. code-block:: bash

    $ sudo salt 'router*' state.sls router.ntp

Besides the CLI, the state can be scheduled or executed when triggered by a
certain event.
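For example, the same state could be re-applied periodically through Salt's
scheduler. A minimal sketch for the (proxy) minion configuration or pillar
(``ntp_consistency`` is an arbitrary job name chosen for this illustration):

.. code-block:: yaml

    schedule:
      ntp_consistency:
        function: state.sls
        args:
          - router.ntp
        hours: 24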
JUNOS
-----

Juniper has developed a Junos-specific proxy infrastructure which allows
remote execution and configuration management of Junos devices without
having to install SaltStack on the device. The infrastructure includes:

- :mod:`Junos proxy <salt.proxy.junos>`
- :mod:`Junos execution module <salt.modules.junos>`
- :mod:`Junos state module <salt.states.junos>`
- :mod:`Junos syslog engine <salt.engines.junos_syslog>`

The execution and state modules are implemented using junos-eznc (PyEZ).
Junos PyEZ is a microframework for Python that enables you to remotely manage
and automate devices running the Junos operating system.
Getting started
###############

Install PyEZ on the system which will run the Junos proxy minion.
It is required to run Junos-specific modules.

.. code-block:: shell

    pip install junos-eznc
Next, set the master of the proxy minions.

``/etc/salt/proxy``

.. code-block:: yaml

    master: <master_ip>

Add the details of the Junos device. Device details are usually stored in
Salt pillars. If you do not wish to store credentials in the pillar, you can
set up passwordless SSH.
``/srv/pillar/vmx_details.sls``

.. code-block:: yaml

    proxy:
      proxytype: junos
      host: <hostip>
      username: user
      passwd: secret123

Map the pillar file to the proxy minion. This is done in the top file.

``/srv/pillar/top.sls``

.. code-block:: yaml

    base:
      vmx:
        - vmx_details
.. note::

    Before starting the Junos proxy make sure that netconf is enabled on the
    Junos device. This can be done by adding the following configuration on
    the Junos device.

    .. code-block:: shell

        set system services netconf ssh
Start the salt master.

.. code-block:: bash

    salt-master -l debug

Then start the salt proxy.

.. code-block:: bash

    salt-proxy --proxyid=vmx -l debug

Once the master and Junos proxy minion have started, we can run execution
and state modules on the proxy minion. Below are a few examples.
CLI examples
############

For detailed documentation of all the Junos execution modules, refer to the
:mod:`Junos execution module <salt.modules.junos>` documentation.

Display device facts.

.. code-block:: bash

    $ sudo salt 'vmx' junos.facts

Refresh the Junos facts. This function will also refresh the facts which are
stored in salt grains. (The Junos proxy stores Junos facts in the salt grains.)

.. code-block:: bash

    $ sudo salt 'vmx' junos.facts_refresh

Call an RPC.

.. code-block:: bash

    $ sudo salt 'vmx' junos.rpc 'get-interface-information' '/var/log/interface-info.txt' terse=True

Install configuration on the device.

.. code-block:: bash

    $ sudo salt 'vmx' junos.install_config 'salt://my_config.set'

Shut down the Junos device.

.. code-block:: bash

    $ sudo salt 'vmx' junos.shutdown shutdown=True in_min=10
State file examples
###################

For detailed documentation of all the Junos state modules, refer to the
:mod:`Junos state module <salt.states.junos>` documentation.

Execute an RPC on the Junos device and store the output in a file.

``/srv/salt/rpc.sls``

.. code-block:: yaml

    get-interface-information:
      junos:
        - rpc
        - dest: /home/user/rpc.log
        - interface_name: lo0

Lock the Junos device, load the configuration, commit it and unlock
the device.

``/srv/salt/load.sls``

.. code-block:: yaml

    lock the config:
      junos.lock

    salt://configs/my_config.set:
      junos:
        - install_config
        - timeout: 100
        - diffs_file: 'var/log/diff'

    commit the changes:
      junos:
        - commit

    unlock the config:
      junos.unlock
Install the appropriate image on the device according to the device
personality.

``/srv/salt/image_install.sls``

.. code-block:: jinja

    {% if grains['junos_facts']['personality'] == 'MX' %}
    salt://images/mx_junos_image.tgz:
      junos:
        - install_os
        - timeout: 100
        - reboot: True
    {% elif grains['junos_facts']['personality'] == 'EX' %}
    salt://images/ex_junos_image.tgz:
      junos:
        - install_os
        - timeout: 150
    {% elif grains['junos_facts']['personality'] == 'SRX' %}
    salt://images/srx_junos_image.tgz:
      junos:
        - install_os
        - timeout: 150
    {% endif %}
Junos Syslog Engine
###################

:mod:`Junos Syslog Engine <salt.engines.junos_syslog>` is a Salt engine
which receives data from various Junos devices, extracts event information and
forwards it on the master/minion event bus. To start the engine on the Salt
master, add the following configuration in the master config file.
The engine can also run on the Salt minion.

``/etc/salt/master``

.. code-block:: yaml

    engines:
      - junos_syslog:
          port: xxx

For the junos_syslog engine to receive events, syslog must be set on the Junos
device. This can be done via the following configuration:

.. code-block:: shell

    set system syslog host <ip-of-the-salt-device> port xxx any any
.. toctree::
    :maxdepth: 2
    :glob: