Multi-threading

MIL also supports multi-threading. Multi-threading is the ability to perform multiple operations simultaneously in the same process. This is done by creating separate threads (execution queues): operations within a thread execute sequentially, while operations in different threads execute simultaneously and independently.

Threads within a process share the same data. Individual threads can communicate with each other and exchange data such as MIL identifiers.

Multi-threading is most appropriate for applications where independent tasks can be done simultaneously but need to share data or to be controlled and synchronized within a main task.

Multi-threading does not always result in an increase in speed and efficiency. Threads running simultaneously on the same CPU share the same resources (such as memory). When using a machine with multiple CPUs under Windows, the threads generally run on separate CPUs and provide more processing power. However, since they share the same memory, operations that are I/O intensive and require only simple processing might not be accelerated.

For better performance, it is recommended to limit the interaction between threads (for example, synchronization between threads and any shared resources). Furthermore, in a multi-threaded application, such as a multi-camera application, you will get better performance if the total number of threads, including the multi-processing threads, is equal to or less than the number of cores. In such an application, you should either limit the number of cores used by each thread or disable multi-core processing altogether. You can limit cores by using MthrControlMp() and setting M_CORE_MAX for each thread, so that the sum of the M_CORE_MAX values for all threads is less than or equal to the number of cores on your computer. Alternatively, you can use MappControlMp() and disable multi-core use altogether by setting M_MP_USE to M_DISABLE.
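
As a rough sketch, assuming the five-parameter (Id, ControlType, TypeFlag, Value, ValuePtr) form of the ...ControlMp() functions and placeholder identifiers (MilApplication, MilThreadA, MilThreadB) allocated elsewhere, the two alternatives look like this; verify the exact prototypes against your mil.h:

  /* Option 1: on a 4-core computer, cap each of two threads so that    */
  /* their M_CORE_MAX values sum to the number of cores.                */
  MthrControlMp(MilThreadA, M_CORE_MAX, M_DEFAULT, 2, M_NULL);
  MthrControlMp(MilThreadB, M_CORE_MAX, M_DEFAULT, 2, M_NULL);

  /* Option 2: disable multi-core processing for the whole application. */
  MappControlMp(MilApplication, M_MP_USE, M_DEFAULT, M_DISABLE, M_NULL);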

Most applications do not require the use of multiple threads since there are other ways of multi-tasking. Mechanisms such as asynchronous grab and call-back functions can be used (see MdigControl() and MdigHookFunction()). Applications implemented with these alternative mechanisms are often simpler to write and easier to maintain than multi-threaded applications.
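
For example, a single-threaded alternative for handling grabbed frames might use an asynchronous grab with a grab-end callback, roughly as sketched below; the hook-handler prototype and the M_GRAB_MODE/M_GRAB_END values follow common MIL usage, but should be confirmed in the MdigControl() and MdigHookFunction() references:

  /* Callback invoked by MIL at the end of each grab. */
  MIL_INT MFTYPE GrabEndHandler(MIL_INT HookType, MIL_ID EventId, void *UserDataPtr)
     {
     /* Process the grabbed frame here. */
     return M_NULL;
     }

  /* In the application code: */
  MdigControl(MilDigitizer, M_GRAB_MODE, M_ASYNCHRONOUS);        /* Grab calls return immediately.    */
  MdigHookFunction(MilDigitizer, M_GRAB_END, GrabEndHandler, M_NULL);
  MdigGrab(MilDigitizer, MilImage);                              /* Host remains free for other work. */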

MIL and multi-threading

When your application contains several independent processing tasks that can be performed in parallel, you can design it so that each part is controlled by a separate thread (or task).

Creating threads using MIL

Under multi-thread operating systems, you can create as many threads as you require. You can create threads using commands provided by the operating system, or using the MthrAlloc() function provided by MIL. The MthrAlloc() function is portable, which means that it can be called from user-defined MIL functions that are executed on systems with multiple processors. For information on executing functions on systems with on-board processors, see the Master/slave dynamics on a remote system section of Chapter 53: The MIL function development module.

There are two methods for creating threads using the MthrAlloc() function:

  • With the first method (M_THREAD), the MthrAlloc() function creates a MIL thread context for the new thread and allows you to specify a pointer to a function that will be executed by the thread. This user-written function must include all the operations and function calls that you consider part of that thread (see the sketch after this list). When a thread contains a function call whose target processor is an on-board processor that supports multi-threading, MIL automatically creates a corresponding thread on that system's on-board processor. The functions of the Thread module allow you to synchronize threads running on the Host and/or various MIL systems.

  • With the second method, MthrAlloc() creates a selectable thread (M_SELECTABLE_THREAD). Selectable threads are threads executed on an on-board processor that supports multi-threading and can be controlled from a single corresponding thread on the Host. Use MthrControl() with M_THREAD_SELECT to send MIL functions to be executed by a selectable thread.
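
The following sketch illustrates the first method (M_THREAD), using the parameter order found in the MIL multi-threading examples, MthrAlloc(SystemId, ObjectType, ControlFlag, ThreadFctPtr, UserDataPtr, &ThreadId); the function and variable names are placeholders, and the exact prototype should be checked in your MIL reference:

  /* Thread function: every MIL call made here belongs to this thread.     */
  MIL_UINT32 MFTYPE ProcessingThread(void *UserDataPtr)
     {
     /* Work on data private to the thread (for example, its own buffer).  */
     return 0;
     }

  /* In the main thread: */
  MIL_ID MilThread;
  MthrAlloc(MilSystem, M_THREAD, M_DEFAULT, ProcessingThread, &ThreadData, &MilThread);
  /* ... the main thread continues with other work ...                     */
  MthrWait(MilThread, M_THREAD_END_WAIT, M_NULL);   /* Wait for the thread to end. */
  MthrFree(MilThread);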

Every thread in a MIL application, including the main thread which initially called MappAlloc(), shares the same application context settings. Calling MappControl() in any thread will affect the application context settings for every thread, unless M_THREAD_CURRENT is added to the ControlType. When M_THREAD_CURRENT is added to the ControlType, the specified application context setting applies only to the thread in which the call was made.

When a thread changes an application context setting using M_THREAD_CURRENT, only the specified ControlType becomes unique to that thread, and it remains unique. For example, in a MIL application with three threads (Thread A, Thread B, and the main thread), all threads share the same application context settings by default. The following calls to MappControl() demonstrate the shared and unique application context settings of this example (a code sketch follows the list).

  1. If Thread A calls MappControl() with M_ERROR + M_THREAD_CURRENT, then Thread A will have the new error setting, while Thread B and the main thread will still share the original default error setting.

  2. After that call, if either Thread B or the main thread calls MappControl() with M_ERROR, but without adding M_THREAD_CURRENT, Thread B and the main thread will change their error setting, while Thread A will remain with its unique setting.

  3. Finally, if after the first two calls any thread calls MappControl() with M_PARAMETER, but without M_THREAD_CURRENT, all three threads will share the new setting.
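
Step 1 might look like the fragment below; the two-parameter form of MappControl() is shown, and some MIL versions take the application identifier as an additional first parameter, so adapt it to your installation:

  /* In Thread A: disable error printing for this thread only.           */
  MappControl(M_ERROR + M_THREAD_CURRENT, M_PRINT_DISABLE);

  /* Later, in Thread B or the main thread (no M_THREAD_CURRENT):        */
  /* changes the error setting shared by every thread except Thread A.   */
  MappControl(M_ERROR, M_PRINT_ENABLE);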

While application context settings are generally shared between all threads (except when made unique with M_THREAD_CURRENT), thread context settings, controlled using MthrControl(), are unique to each thread.

Thread execution

MIL functions in any thread are executed as follows:

  • If the target processor is the Host CPU, processing in each thread is determined by the operating system.

  • If the target processor is an on-board processor of a system that supports multi-threading, MIL automatically creates, and eventually terminates, an on-board thread for each thread that sends commands to the board.

Important
Since the creation of on-board threads is done automatically, you do not have to specify the system on which to create a specific thread. However, you can do so by creating selectable threads on a particular system.

Synchronization and mutex

Thread synchronization is generally done using the Host synchronization services (such as Windows event objects). However, when using a system with an on-board processor, the activities of this processor cannot be synchronized by the Host.

This means that threads continue execution without waiting for the execution of the on-board functions to complete. In most cases, this behavior is acceptable, since it leaves the Host available for other tasks. However, for operations that require sequential execution of functions to return valid results (for example, MbufGet() after an MdigGrab()), MIL automatically synchronizes the threads, forcing the Host to wait for completion of the earlier function(s).

Explicit synchronization of threads is necessary if functions sharing a common resource might conflict with each other. For example, if two threads sharing the same image buffer are not synchronized, and each thread tries to clear the buffer to a different value, these functions might execute at the same time and the buffer could be cleared to either value or even to a combination of both. Use the synchronization features of the MthrControl(), MthrWait(), and MthrWaitMultiple() functions to synchronize the flow of threads. MthrWait() forces the current thread to wait for the completion of the specified thread or the change of state of the specified MIL event. MthrWaitMultiple() forces the current thread to wait for a change in state in one or all of the MIL events identified in a user-supplied array of events.
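
A typical event-based handshake, following the pattern of the MIL multi-threading examples (parameter order assumed; verify it in your MIL reference), looks roughly like this:

  MIL_ID DoneEvent;

  /* Allocate an auto-reset event that starts in the not-signaled state. */
  MthrAlloc(MilSystem, M_EVENT, M_NOT_SIGNALED + M_AUTO_RESET, M_NULL, M_NULL, &DoneEvent);

  /* In the worker thread, once its result is ready:                     */
  MthrControl(DoneEvent, M_EVENT_SET, M_SIGNALED);

  /* In the thread that needs the result:                                */
  MthrWait(DoneEvent, M_EVENT_WAIT, M_NULL);

  MthrFree(DoneEvent);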

MthrControl() allows you to use mutexes to synchronize the flow of threads. A mutual exclusion object allows threads to synchronize access to shared resources. Once a mutex is allocated on the specified system, you can lock and unlock critical sections of code. Locking a critical section of code ensures that no two threads can access the same data at the same time. To lock a MIL mutex, you must call MthrControl() with M_LOCK or M_LOCK_TRY immediately preceding the section of code to lock. If you lock a mutex, you must unlock it at the end of the critical section of code using MthrControl() with M_UNLOCK. Once you are finished using the mutex, free it using MthrFree(). The use of a mutex is illustrated in the sketch below, where MimArith() and MimFlip() access the same data.
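
A minimal sketch, assuming the mutex is allocated with MthrAlloc() and M_MUTEX and that MthrControl() takes (MutexId, ControlType, ControlValue); the image identifiers and the MimArith()/MimFlip() operations shown are illustrative only:

  MIL_ID MilMutex;
  MthrAlloc(MilSystem, M_MUTEX, M_DEFAULT, M_NULL, M_NULL, &MilMutex);

  /* Thread 1: arithmetic on the shared image buffer.                    */
  MthrControl(MilMutex, M_LOCK, M_DEFAULT);
  MimArith(MilSharedImage, 10, MilSharedImage, M_ADD_CONST);
  MthrControl(MilMutex, M_UNLOCK, M_DEFAULT);

  /* Thread 2: flip of the same shared image buffer.                     */
  MthrControl(MilMutex, M_LOCK, M_DEFAULT);
  MimFlip(MilSharedImage, MilSharedImage, M_FLIP_HORIZONTAL, M_DEFAULT);
  MthrControl(MilMutex, M_UNLOCK, M_DEFAULT);

  /* When the mutex is no longer needed:                                 */
  MthrFree(MilMutex);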

MthrControl() with M_LOCK and M_LOCK_TRY are very similar; the difference occurs when one attempts to lock a mutex that is already locked. MthrControl() with M_LOCK will block the thread, forcing it to wait for the mutex to become unlocked before executing the critical section of code. MthrControl() with M_LOCK_TRY will not wait for the mutex to unlock. If the mutex is currently locked, the thread will proceed to execute, skipping the critical section of code. Note that once you call M_LOCK_TRY, you should immediately call MthrInquire() with M_LOCK_TRY to inquire whether the mutex was locked.
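
With M_LOCK_TRY, the pattern sketched below applies; the value returned by MthrInquire() with M_LOCK_TRY is assumed here to indicate whether the lock was obtained, so check the MthrInquire() reference for the exact returned values:

  MIL_INT LockObtained = 0;

  MthrControl(MilMutex, M_LOCK_TRY, M_DEFAULT);      /* Does not block if already locked.  */
  MthrInquire(MilMutex, M_LOCK_TRY, &LockObtained);  /* Was the mutex locked by this call? */
  if (LockObtained)
     {
     /* Critical section.                                                */
     MthrControl(MilMutex, M_UNLOCK, M_DEFAULT);
     }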

Thread control

Windows operating systems are multi-process and multi-thread operating systems. They provide various thread control services, including events (used to synchronize threads).

The MthrAlloc() function serves as a link between MIL and the operating system; the function allows you to create an operating-system-independent MIL version of these services. Threads and events created using MthrAlloc() can be used in addition to, or instead of, the events created using commands of the operating system.

MthrControl() controls and coordinates both threads and events. The MthrWait() function synchronizes thread processing by forcing a "wait" state. The MthrInquire() function inquires about both the settings of a MIL thread context and the state of an event allocated using MthrAlloc(). MthrFree() frees the allocated MIL thread context or event.

The MthrAlloc() function allows you to specify a particular system on which to allocate MIL thread contexts or events. This permits you to synchronize the execution of functions on a specific MIL system.

When creating multiple threads on a multi-core computer, use MappControlMp() and MthrControlMp() to restrict the number of cores used by each thread. Restricting the number of cores used by each thread might be more efficient; see the Transparent multi-core use section earlier in this chapter for more information.

Using image processing and analysis contexts in multiple threads

Multiple threads cannot share an image processing or analysis context, but you can duplicate the context so that each thread has its own copy. For each image processing and analysis module, use its M...Stream() function to duplicate its context. Using M...Stream() is faster and more convenient than duplicating contexts with M...Save() and M...Restore(). The basic steps to duplicate a context, sketched in code after this list, are:

  1. Call M...Stream() with M_INQUIRE_SIZE_BYTE to return the number of bytes needed to store the context.

  2. Allocate a block of memory of the size established in step 1.

  3. Call M...Stream() with M_SAVE and M_MEMORY to save the context to the allocated memory.

  4. Call M...Stream() with M_LOAD or M_RESTORE to create a duplicate context.

At the end of the application, all duplicated contexts, as well as the original context, must be freed using M...Free().
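
As an illustration only, the fragment below walks through these steps using the Model Finder module's MmodStream() as the M...Stream() function; the parameter order (memory pointer or file name, system, operation, stream type, version, control flag, context ID, size variable) is assumed from the typical M...Stream() prototype and must be verified in the reference for the module you are actually using. The OriginalContext and MilSystem identifiers are placeholders.

  MIL_INT   SizeByte = 0;
  MIL_UINT8 *MemPtr  = NULL;                 /* Requires <stdlib.h> for malloc()/free(). */
  MIL_ID    DuplicateContext = M_NULL;

  /* 1. Inquire the number of bytes needed to stream the original context. */
  MmodStream(M_NULL, M_NULL, M_INQUIRE_SIZE_BYTE, M_MEMORY, M_DEFAULT, M_DEFAULT,
             &OriginalContext, &SizeByte);

  /* 2. Allocate a memory block of that size.                              */
  MemPtr = (MIL_UINT8 *)malloc((size_t)SizeByte);

  /* 3. Save the original context to the memory block.                     */
  MmodStream(MemPtr, M_NULL, M_SAVE, M_MEMORY, M_DEFAULT, M_DEFAULT,
             &OriginalContext, &SizeByte);

  /* 4. Restore from the memory block to create an independent duplicate.  */
  MmodStream(MemPtr, MilSystem, M_RESTORE, M_MEMORY, M_DEFAULT, M_DEFAULT,
             &DuplicateContext, &SizeByte);

  free(MemPtr);
  /* Eventually free both contexts with MmodFree().                        */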

Thread-safety in MIL

To avoid race conditions, it is important to know the guarantees that MIL provides with regard to thread-safety. There are multiple approaches to thread-safety; MIL supports two of them: re-entrancy and the mutual exclusion (M_MUTEX) mechanism.

Re-entrancy is an approach to thread-safety that focuses on avoiding shared state. In MIL, it means that a MIL function can safely be called concurrently from multiple threads, as long as the data passed to the function's parameters (for example, image buffers) is not shared between threads. Unless otherwise specified, all MIL functions are considered re-entrant.

The second approach to thread-safety deals with synchronization in situations where shared state cannot be avoided. The mutual exclusion mechanism allows you to serialize access to shared data, such as a MIL object, so that only one thread can read from and/or write to the shared data at a time. In MIL, mutual exclusion is implemented in a platform-independent way using the M_MUTEX object. Unless otherwise stated, functions that access or modify a MIL object cannot be called concurrently on the same object from multiple threads without using the mutual exclusion mechanism (M_MUTEX). For more information, see the Synchronization and mutex subsection of this section.

Error reporting

Some functions in MIL are asynchronous, that is, they queue their command to the hardware and then immediately return control to the Host. Errors are logged once the function is executed on the processor of the specified system, and are reported immediately after being logged.

To check for errors, use the MappGetError() function. In multi-thread environments, a call to MappGetError() returns the last error that occurred in the current thread or, if none, checks for errors in other threads running MIL. To return only errors occurring in the current thread, add M_THREAD_CURRENT to the ErrorType parameter (M_CURRENT + M_THREAD_CURRENT).
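
A sketch of this check, using the two-parameter form of MappGetError() (some MIL versions take the application identifier as an additional first parameter) and the M_NULL_ERROR code for "no error":

  MIL_INT ErrorCode;

  /* Last error in this thread or, if none, in any other MIL thread.     */
  MappGetError(M_CURRENT, &ErrorCode);

  /* Last error in this thread only.                                     */
  MappGetError(M_CURRENT + M_THREAD_CURRENT, &ErrorCode);

  if (ErrorCode != M_NULL_ERROR)
     {
     /* Handle or report the error.                                      */
     }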