[RFC,v2,22/26] doc: update references to master/slave lcore in samples

Message ID 20200605225811.26342-23-stephen@networkplumber.org (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series Change references to master/slave to |

Checks

Context               Check    Description
ci/Intel-compilation  success  Compilation OK
ci/checkpatch         success  coding style OK

Commit Message

Stephen Hemminger June 5, 2020, 10:58 p.m. UTC
  New terms are initial and worker lcores.
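
  As a rough sketch of the launch pattern these sample guides describe,
  restated with the new terms (the RTE_LCORE_FOREACH_WORKER macro name
  below is assumed from the rename done elsewhere in this series; the
  tree at this point still spells it RTE_LCORE_FOREACH_SLAVE):

      #include <stdio.h>
      #include <rte_eal.h>
      #include <rte_launch.h>
      #include <rte_lcore.h>
      #include <rte_debug.h>

      /* trivial per-lcore job, as in the hello_world sample */
      static int
      lcore_hello(void *arg)
      {
          (void)arg;
          printf("hello from lcore %u\n", rte_lcore_id());
          return 0;
      }

      int
      main(int argc, char **argv)
      {
          unsigned int lcore_id;

          if (rte_eal_init(argc, argv) < 0)
              rte_panic("Cannot init EAL\n");

          /* call lcore_hello() on every worker lcore */
          RTE_LCORE_FOREACH_WORKER(lcore_id)
              rte_eal_remote_launch(lcore_hello, NULL, lcore_id);

          /* ... and on the initial lcore, then wait for the workers */
          lcore_hello(NULL);
          rte_eal_mp_wait_lcore();

          rte_eal_cleanup();
          return 0;
      }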

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/guides/sample_app_ug/hello_world.rst        | 2 +-
 doc/guides/sample_app_ug/ioat.rst               | 2 +-
 doc/guides/sample_app_ug/l3_forward_graph.rst   | 6 +++---
 doc/guides/sample_app_ug/multi_process.rst      | 4 ++--
 doc/guides/sample_app_ug/performance_thread.rst | 2 +-
 doc/guides/sample_app_ug/qos_scheduler.rst      | 2 +-
 doc/guides/sample_app_ug/timer.rst              | 4 ++--
 7 files changed, 11 insertions(+), 11 deletions(-)
  

Patch

diff --git a/doc/guides/sample_app_ug/hello_world.rst b/doc/guides/sample_app_ug/hello_world.rst
index 46f997a7dce3..8fbcc1898215 100644
--- a/doc/guides/sample_app_ug/hello_world.rst
+++ b/doc/guides/sample_app_ug/hello_world.rst
@@ -75,7 +75,7 @@  The code that launches the function on each lcore is as follows:
 
 .. code-block:: c
 
-    /* call lcore_hello() on every slave lcore */
+    /* call lcore_hello() on every worker lcore */
 
     RTE_LCORE_FOREACH_SLAVE(lcore_id) {
        rte_eal_remote_launch(lcore_hello, NULL, lcore_id);
diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst
index bab7654b8d4d..9fb2f4e30b71 100644
--- a/doc/guides/sample_app_ug/ioat.rst
+++ b/doc/guides/sample_app_ug/ioat.rst
@@ -208,7 +208,7 @@  After that each port application assigns resources needed.
     cfg.nb_lcores = rte_lcore_count() - 1;
     if (cfg.nb_lcores < 1)
         rte_exit(EXIT_FAILURE,
-            "There should be at least one slave lcore.\n");
+            "There should be at least one worker lcore.\n");
 
     ret = 0;
 
diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst b/doc/guides/sample_app_ug/l3_forward_graph.rst
index df50827bab86..4ac96fc0c2f7 100644
--- a/doc/guides/sample_app_ug/l3_forward_graph.rst
+++ b/doc/guides/sample_app_ug/l3_forward_graph.rst
@@ -22,7 +22,7 @@  Run-time path is main thing that differs from L3 forwarding sample application.
 Difference is that forwarding logic starting from Rx, followed by LPM lookup,
 TTL update and finally Tx is implemented inside graph nodes. These nodes are
 interconnected in graph framework. Application main loop needs to walk over
-graph using ``rte_graph_walk()`` with graph objects created one per slave lcore.
+graph using ``rte_graph_walk()`` with graph objects created one per worker lcore.
 
 The lookup method is as per implementation of ``ip4_lookup`` graph node.
 The ID of the output interface for the input packet is the next hop returned by
@@ -265,7 +265,7 @@  headers will be provided run-time using ``rte_node_ip4_route_add()`` and
     Since currently ``ip4_lookup`` and ``ip4_rewrite`` nodes don't support
     lock-less mechanisms(RCU, etc) to add run-time forwarding data like route and
     rewrite data, forwarding data is added before packet processing loop is
-    launched on slave lcore.
+    launched on worker lcore.
 
 .. code-block:: c
 
@@ -297,7 +297,7 @@  Packet Forwarding using Graph Walk
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Now that all the device configurations are done, graph creations are done and
-forwarding data is updated with nodes, slave lcores will be launched with graph
+forwarding data is updated with nodes, worker lcores will be launched with graph
 main loop. Graph main loop is very simple in the sense that it needs to
 continuously call a non-blocking API ``rte_graph_walk()`` with it's lcore
 specific graph object that was already created.
diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
index f2a79a639763..017819a1a76f 100644
--- a/doc/guides/sample_app_ug/multi_process.rst
+++ b/doc/guides/sample_app_ug/multi_process.rst
@@ -66,7 +66,7 @@  The process should start successfully and display a command prompt as follows:
 
     EAL: check igb_uio module
     EAL: check module finished
-    EAL: Master core 0 is ready (tid=54e41820)
+    EAL: Initial core 0 is ready (tid=54e41820)
     EAL: Core 1 is ready (tid=53b32700)
 
     Starting core 1
@@ -92,7 +92,7 @@  At any stage, either process can be terminated using the quit command.
 
 .. code-block:: console
 
-   EAL: Master core 10 is ready (tid=b5f89820)           EAL: Master core 8 is ready (tid=864a3820)
+   EAL: Initial core 10 is ready (tid=b5f89820)           EAL: Initial core 8 is ready (tid=864a3820)
    EAL: Core 11 is ready (tid=84ffe700)                  EAL: Core 9 is ready (tid=85995700)
    Starting core 11                                      Starting core 9
    simple_mp > send hello_secondary                      simple_mp > core 9: Received 'hello_secondary'
diff --git a/doc/guides/sample_app_ug/performance_thread.rst b/doc/guides/sample_app_ug/performance_thread.rst
index b04d0ba444af..f694f3dfc998 100644
--- a/doc/guides/sample_app_ug/performance_thread.rst
+++ b/doc/guides/sample_app_ug/performance_thread.rst
@@ -1217,5 +1217,5 @@  Setting ``LTHREAD_DIAG`` also enables counting of statistics about cache and
 queue usage, and these statistics can be displayed by calling the function
 ``lthread_diag_stats_display()``. This function also performs a consistency
 check on the caches and queues. The function should only be called from the
-master EAL thread after all slave threads have stopped and returned to the C
+initial EAL thread after all worker threads have stopped and returned to the C
 main program, otherwise the consistency check will fail.
diff --git a/doc/guides/sample_app_ug/qos_scheduler.rst b/doc/guides/sample_app_ug/qos_scheduler.rst
index b5010657a7d8..3258f08358d1 100644
--- a/doc/guides/sample_app_ug/qos_scheduler.rst
+++ b/doc/guides/sample_app_ug/qos_scheduler.rst
@@ -71,7 +71,7 @@  Optional application parameters include:
     In this mode, the application shows a command line that can be used for obtaining statistics while
     scheduling is taking place (see interactive mode below for more information).
 
-*   --mst n: Master core index (the default value is 1).
+*   --mst n: Initial core index (the default value is 1).
 
 *   --rsz "A, B, C": Ring sizes:
 
diff --git a/doc/guides/sample_app_ug/timer.rst b/doc/guides/sample_app_ug/timer.rst
index 98d762d2388c..ff6f6581bd54 100644
--- a/doc/guides/sample_app_ug/timer.rst
+++ b/doc/guides/sample_app_ug/timer.rst
@@ -49,11 +49,11 @@  In addition to EAL initialization, the timer subsystem must be initialized, by c
     rte_timer_subsystem_init();
 
 After timer creation (see the next paragraph),
-the main loop is executed on each slave lcore using the well-known rte_eal_remote_launch() and also on the master.
+the main loop is executed on each worker lcore using the well-known rte_eal_remote_launch() and also on the initial lcore.
 
 .. code-block:: c
 
-    /* call lcore_mainloop() on every slave lcore  */
+    /* call lcore_mainloop() on every worker lcore  */
 
     RTE_LCORE_FOREACH_SLAVE(lcore_id) {
         rte_eal_remote_launch(lcore_mainloop, NULL, lcore_id);