Real-Time Kernel (RTOS)

All of the software within the embedded SDK is built upon a lean, high-performance real-time kernel. The kernel provides the components needed to utilize the CPU effectively and efficiently.

The kernel provides the fundamental elements that help organize the event-driven and time-driven logic within any application. Rather than relying on traditional polling logic and complex state machines inside a single "super loop," the kernel allows multiple threads (or tasks) to be coded independently of one another. Threads separate independent tasks, which reduces complexity and improves code readability. Threads can be assigned equal or different priorities, and they can be made to execute only when certain events have occurred, increasing CPU efficiency.

The kernel was designed specifically for use with microcontrollers and employs a minimalist yet sophisticated architecture. The kernel is provided with source code written in C. It has been developed in a modular fashion that allows each component to be included or excluded, making it possible to reduce the footprint when needed. The kernel also performs no internal memory allocation, so most limitations (e.g., the maximum number of threads) are imposed only by the amount of RAM available on the target controller.
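Compile-time component selection of this kind is typically driven by a configuration header. The following is a hypothetical sketch of what such a header could look like; every macro name here is invented for illustration and does not reflect the SDK's actual configuration options.

```c
/* Hypothetical kernel configuration header (illustrative only).
 * Setting an option to 0 excludes that component's code from the
 * build, shrinking the footprint as described above. */

#define KERNEL_INCLUDE_QUEUES       1   /* keep queue support           */
#define KERNEL_INCLUDE_MEM_HEAP     0   /* exclude the heap to save ROM */
#define KERNEL_INCLUDE_TIMERS       1   /* keep software timers         */
#define KERNEL_TIMER_THREAD_COUNT   2   /* bounded only by target RAM   */
```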

Threads & Scheduling

A thread is a sequence of programmed instructions that can be scheduled and executed independently by the kernel. Threads provide an abstraction for concurrent execution and ultimately a method for multitasking. The kernel supports multiple threads that can be coded independently of each other, assigned unique or equal priorities, and made to execute only when certain events have occurred.

The kernel uses a preemptive priority-based thread scheduler that provides the lowest possible latency for the application's highest-priority threads. The kernel performs round-robin scheduling among all threads of equal priority. Each thread is given a maximum time-slice that specifies how long it can execute on the CPU before the kernel switches to the next equal-priority thread.
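The scheduling policy above can be sketched as a "pick next thread" decision: the highest-priority ready thread always wins, and threads sharing that priority rotate when a time-slice expires. This is a minimal single-file illustration; the struct and function names are invented, and a real kernel would use ready queues rather than a linear scan.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative thread record. Assumption: a lower number means a
 * higher priority, which is a common RTOS convention. */
struct thread {
    int priority;        /* lower value = higher priority */
    int ticks_remaining; /* remaining time-slice budget, in kernel ticks */
};

/* Pick the index of the next thread to run from a ready array of n
 * threads, given the currently running index. */
static size_t pick_next(const struct thread *ready, size_t n, size_t current)
{
    /* Find the best (highest) priority present in the ready set. */
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (ready[i].priority < ready[best].priority)
            best = i;

    /* A higher-priority thread always preempts the current one. */
    if (ready[current].priority != ready[best].priority)
        return best;

    /* Round-robin: if the current thread's slice is exhausted, rotate
     * to the next thread that shares the best priority. */
    if (ready[current].ticks_remaining == 0) {
        size_t i = current;
        do {
            i = (i + 1) % n;
        } while (ready[i].priority != ready[best].priority);
        return i;
    }

    /* Slice not exhausted and no higher-priority work: keep running. */
    return current;
}
```

A real scheduler makes this decision on every tick interrupt and on every event that readies a thread, which is what produces the low latency for high-priority work described above.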

Memory Management

Memory allocation is a challenge for real-time embedded applications. Most dynamic allocation methods are neither fast nor, more importantly, deterministic. Rather than being forced to allocate for the worst case, our software defers all memory allocation to the caller of its APIs. This approach maintains a high level of configurability without requiring a worst-case allocation to be determined for everything. Although dynamic allocation could be used exclusively, it is common and adequate for most applications to statically allocate the objects that exist throughout the entire lifetime of the application.

There are scenarios where dynamic memory allocation is preferred. Dynamically allocated objects can be passed around by reference (pointer) rather than copied, which can be much more efficient. The kernel provides two different types of dynamic allocation.

  • Memory Pools - Memory pools within the kernel provide a fast and deterministic method of allocating fixed-size blocks from a pool of memory. The time to allocate or free a block from a pool is constant regardless of the size or number of blocks contained within the pool. A memory pool does not suffer from any type of fragmentation. Pools are the preferred method when dynamically allocating memory within an ISR.

  • Memory Heaps - A memory heap within the kernel provides a method of allocating variable-length blocks of memory. While a heap supports allocations of any length, it is subject to fragmentation. Allocation is not deterministic, as the heap may have to be searched for a free block large enough to satisfy the request. A memory heap should not be used anywhere with strict timing requirements, such as within an ISR.
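The constant-time behavior of a memory pool comes from threading a free list through the unused blocks themselves, so allocate and free are each a single pointer swap. The sketch below is a minimal, caller-supplies-the-memory implementation in that spirit; the names are illustrative, not the SDK's actual API, and a real kernel version would add locking for thread/ISR safety.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Fixed-block pool: free blocks form a singly linked list, with each
 * free block's first bytes reused to hold the "next free" pointer. */
struct mem_pool {
    void  *free_list;  /* head of the free-block list */
    size_t block_size; /* must be >= sizeof(void *) and suitably aligned */
};

/* Caller provides the backing memory, mirroring the "caller allocates"
 * approach described in the Memory Management section. */
static void pool_init(struct mem_pool *p, void *mem,
                      size_t block_size, size_t block_count)
{
    uint8_t *base = mem;
    p->block_size = block_size;
    p->free_list = NULL;
    for (size_t i = 0; i < block_count; i++) {
        void *block = base + i * block_size;
        *(void **)block = p->free_list;  /* push block onto free list */
        p->free_list = block;
    }
}

/* Pop one block: O(1) regardless of pool size; NULL when exhausted. */
static void *pool_alloc(struct mem_pool *p)
{
    void *block = p->free_list;
    if (block)
        p->free_list = *(void **)block;
    return block;
}

/* Push a block back: also O(1), and no fragmentation is possible
 * because every block is the same size. */
static void pool_free(struct mem_pool *p, void *block)
{
    *(void **)block = p->free_list;
    p->free_list = block;
}
```

Because both operations are a bounded handful of instructions, this structure is safe to call where a searching heap allocator is not, which is why pools are the preferred choice inside an ISR.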

Data Management

The most basic method for handling data is to use a simple fixed-length array, but an array requires that the number of elements be known ahead of time (possibly even at compile time). As code and applications scale, it can be difficult and cumbersome to build flexible and reusable code within the restrictions of fixed-length arrays. The kernel provides the following components for managing data.

  • Linked Lists - Linked lists provide the ability to link objects together into a single collection or list. A linked list has no limit on the number of objects it can hold as it simply keeps links (pointers) to the objects it contains. The linked lists are implemented as doubly linked lists which means they support traversal in either a forward (first-to-last) or backward (last-to-first) direction as each node within the list contains a link to both the previous and next node.

  • Queues - A queue is a first-in first-out collection of objects. Queues store their objects by reference (pointer), so removal and insertion is very fast and deterministic. Queues support insertion and removal from either threads or interrupt service routines (ISR) and they support blocking of threads when either full or empty.

  • Circular Buffers - A circular buffer provides byte-wide copy-in/copy-out storage of generic data. A buffer can be used to transfer data between threads or between a thread and an ISR. A buffer supports blocking of threads for reading and writing operations.
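The circular buffer's copy-in/copy-out behavior can be shown with a small head/tail ring over caller-supplied memory. This is a single-threaded sketch with invented names; the kernel's real component would add the locking and thread-blocking on full/empty described above.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Byte-wide ring buffer: data is copied in at the tail and copied
 * out at the head, with both indices wrapping at capacity. */
struct ring {
    uint8_t *data;
    size_t   capacity;
    size_t   head;   /* next byte to read */
    size_t   tail;   /* next byte to write */
    size_t   count;  /* bytes currently stored */
};

static void ring_init(struct ring *r, uint8_t *mem, size_t capacity)
{
    r->data = mem;
    r->capacity = capacity;
    r->head = r->tail = r->count = 0;
}

/* Copy up to len bytes in; returns the number actually stored
 * (less than len when the buffer fills). */
static size_t ring_write(struct ring *r, const uint8_t *src, size_t len)
{
    size_t n = 0;
    while (n < len && r->count < r->capacity) {
        r->data[r->tail] = src[n++];
        r->tail = (r->tail + 1) % r->capacity;
        r->count++;
    }
    return n;
}

/* Copy up to len bytes out; returns the number actually removed. */
static size_t ring_read(struct ring *r, uint8_t *dst, size_t len)
{
    size_t n = 0;
    while (n < len && r->count > 0) {
        dst[n++] = r->data[r->head];
        r->head = (r->head + 1) % r->capacity;
        r->count--;
    }
    return n;
}
```

Copy-in/copy-out storage like this suits producer/consumer traffic such as a UART ISR feeding bytes to a processing thread, whereas queues (which store pointers) suit passing whole objects without copying.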

Synchronization

Since the kernel provides the ability to multitask using multiple threads, it becomes necessary to synchronize access to shared resources. The kernel provides the following components for thread synchronization.

  • Mutexes - A mutex is a synchronization object that provides mutually exclusive access to a resource. For example, to prevent two threads from writing to shared memory at the same time, each thread acquires ownership of a mutex before executing the code that accesses the memory. After writing to the shared memory, the thread releases the mutex.

    Mutexes support priority inheritance to eliminate the problem of priority inversion. When a higher-priority thread is blocked waiting to acquire ownership of a mutex, the kernel temporarily gives the mutex's owner the same priority as the blocked thread. Once the mutex has been released, the owning thread returns to its original priority.

  • Semaphores - A counting semaphore is a synchronization object that limits the number of threads that can concurrently access a resource. Each time a thread successfully acquires the semaphore, its count is decremented by one. If the semaphore's count is zero when a thread attempts to acquire it, the calling thread is blocked until the count is incremented. Unlike a mutex, a semaphore has no notion of an owning thread. Without an owner, any thread can release a semaphore, even one different from the thread that originally acquired it.

  • Locks - The kernel provides two simple types of synchronization locks that work by disabling interrupts and/or the switching of threads. A thread lock can be used to protect against other threads by temporarily disabling thread switching. An interrupt lock can be used to protect against other threads and interrupts by temporarily disabling the maskable interrupts.
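The counting-semaphore rule above reduces to a guarded counter. The sketch below models it single-threaded with invented names: a failed acquire returns false where a real kernel would block the calling thread until another thread releases. Note also that release takes no owner argument, reflecting the ownerless semantics described above.

```c
#include <assert.h>
#include <stdbool.h>

/* Counting semaphore: count is the number of threads that may still
 * acquire the resource concurrently. */
struct semaphore {
    unsigned count;
};

/* Returns true on success. Returning false models the case where a
 * real kernel would block the caller until a release occurs. */
static bool sem_try_acquire(struct semaphore *s)
{
    if (s->count == 0)
        return false;
    s->count--;
    return true;
}

/* Any thread may release; the semaphore tracks no owner, unlike a
 * mutex. */
static void sem_release(struct semaphore *s)
{
    s->count++;
}
```

Initializing the count to 1 yields a binary semaphore, which resembles a mutex except that it has no owner and therefore cannot provide priority inheritance.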

Timing

The kernel provides several methods for managing time-based events.

  • Timers - Timers provide a method for calling an application-defined function upon expiration. The timers can be configured to execute the application-defined expiration functions directly from the kernel tick interrupt or from dedicated kernel-level threads. The kernel supports having multiple kernel-level threads for processing timers so that your application-defined functions can make blocking calls without causing added latency for other timers.

  • Timestamps & Stopwatches - The kernel provides a method for acquiring a high-resolution timestamp for diagnostic purposes. The implementation of the timestamp is platform specific, but provides sub-microsecond resolution. The kernel also provides a stopwatch component that uses the timestamp functionality and is used for measuring elapsed time.

Interrupts

The kernel uses a unified interrupt architecture for simplicity and ease of debugging. The kernel temporarily disables interrupts while inside critical functions to protect internal data structures. These critical functions have been optimized to minimize the amount of time that interrupts are disabled. To help with determining overall interrupt latency of the system, the kernel includes code that measures the maximum amount of time that interrupts have been disabled.

Most of the components within the kernel support being called from within interrupt service routines (ISRs). Although the creation and destruction of components must always be performed from a thread context, the memory pools, semaphores, timers, queues, and signals all support some control from within an ISR.