ThreadX User Guide



    the high-performance embedded kernel

    User Guide

Express Logic, Inc.

858.613.6640

    Toll Free 888.THREADX

    FAX 858.521.4259

    http://www.expresslogic.com

    Version 5.0


© 1997-2006 by Express Logic, Inc.

    All rights reserved. This document and the associated ThreadX software are the sole property of

    Express Logic, Inc. Each contains proprietary information of Express Logic, Inc. Reproduction or

    duplication by any means of any portion of this document without the prior written consent of Express

    Logic, Inc. is expressly forbidden.

    Express Logic, Inc. reserves the right to make changes to the specifications described herein at any

    time and without notice in order to improve design or reliability of ThreadX. The information in this

    document has been carefully checked for accuracy; however, Express Logic, Inc. makes no warranty

    pertaining to the correctness of this document.

    Trademarks

ThreadX is a registered trademark of Express Logic, Inc., and picokernel, preemption-threshold, and event-chaining are trademarks of Express Logic, Inc.

    All other product and company names are trademarks or registered trademarks of their respective

    holders.

    Warranty Limitations

Express Logic, Inc. makes no warranty of any kind that the ThreadX products will meet the USER's requirements, or will operate in the manner specified by the USER, or that the operation of the ThreadX products will be uninterrupted or error free, or that any defects that may exist in the ThreadX products will be corrected after the warranty period. Express Logic, Inc. makes no warranties

    of any kind, either expressed or implied, including but not limited to the implied warranties of

    merchantability and fitness for a particular purpose, with respect to the ThreadX products. No oral or

    written information or advice given by Express Logic, Inc., its dealers, distributors, agents, or

    employees shall create any other warranty or in any way increase the scope of this warranty, and

    licensee may not rely on any such information or advice.

    Part Number: 000-1001

    Revision 5.0


Contents

About This Guide 13
  Organization 13
  Guide Conventions 14
  ThreadX Data Types 15
  Customer Support Center 16
    Latest Product Information 16
    What We Need From You 16
    Where to Send Comments About This Guide 17

1 Introduction to ThreadX 19
  ThreadX Unique Features 20
    picokernel Architecture 20
    ANSI C Source Code 20
    Advanced Technology 20
    Not A Black Box 21
    The RTOS Standard 22
  Embedded Applications 22
    Real-time Software 22
    Multitasking 22
    Tasks vs. Threads 23
  ThreadX Benefits 23
    Improved Responsiveness 24
    Software Maintenance 24
    Increased Throughput 24
    Processor Isolation 25
    Dividing the Application 25
    Ease of Use 26
    Improve Time-to-market 26
    Protecting the Software Investment 26

2 Installation and Use of ThreadX 27
  Host Considerations 28
  Target Considerations 28
  Product Distribution 29
  ThreadX Installation 30
  Using ThreadX 31
  Small Example System 32
  Troubleshooting 34
  Configuration Options 34
  ThreadX Version ID 40

3 Functional Components of ThreadX 41
  Execution Overview 44
    Initialization 44
    Thread Execution 44
    Interrupt Service Routines (ISR) 44
    Application Timers 46
  Memory Usage 46
    Static Memory Usage 46
    Dynamic Memory Usage 48
  Initialization 48
    System Reset Vector 48
    Development Tool Initialization 49
    main Function 49
    tx_kernel_enter 49
    Application Definition Function 50
    Interrupts 50
  Thread Execution 50
    Thread Execution States 52
    Thread Entry/Exit Notification 54
    Thread Priorities 54
    Thread Scheduling 55
    Round-robin Scheduling 55
    Time-Slicing 55
    Preemption 56
    Preemption-Threshold 56
    Priority Inheritance 57
    Thread Creation 57
    Thread Control Block TX_THREAD 57
    Currently Executing Thread 59
    Thread Stack Area 59
    Memory Pitfalls 62
    Optional Run-time Stack Checking 62
    Reentrancy 62
    Thread Priority Pitfalls 63
    Priority Overhead 64
    Run-time Thread Performance Information 65
    Debugging Pitfalls 66
  Message Queues 67
    Creating Message Queues 68
    Message Size 68
    Message Queue Capacity 68
    Queue Memory Area 69
    Thread Suspension 69
    Queue Send Notification 70
    Queue Event-chaining 70
    Run-time Queue Performance Information 71
    Queue Control Block TX_QUEUE 72
    Message Destination Pitfall 72
  Counting Semaphores 72
    Mutual Exclusion 73
    Event Notification 73
    Creating Counting Semaphores 74
    Thread Suspension 74
    Semaphore Put Notification 74
    Semaphore Event-chaining 75
    Run-time Semaphore Performance Information 75
    Semaphore Control Block TX_SEMAPHORE 76
    Deadly Embrace 76
    Priority Inversion 78
  Mutexes 78
    Mutex Mutual Exclusion 79
    Creating Mutexes 79
    Thread Suspension 79
    Run-time Mutex Performance Information 80
    Mutex Control Block TX_MUTEX 81
    Deadly Embrace 81
    Priority Inversion 81
  Event Flags 82
    Creating Event Flags Groups 83
    Thread Suspension 83
    Event Flags Set Notification 83
    Event Flags Event-chaining 84
    Run-time Event Flags Performance Information 84
    Event Flags Group Control Block TX_EVENT_FLAGS_GROUP 85
  Memory Block Pools 85
    Creating Memory Block Pools 86
    Memory Block Size 86
    Pool Capacity 86
    Pool's Memory Area 87
    Thread Suspension 87
    Run-time Block Pool Performance Information 87
    Memory Block Pool Control Block TX_BLOCK_POOL 88
    Overwriting Memory Blocks 89
  Memory Byte Pools 89
    Creating Memory Byte Pools 89
    Pool Capacity 90
    Pool's Memory Area 90
    Thread Suspension 90
    Run-time Byte Pool Performance Information 91
    Memory Byte Pool Control Block TX_BYTE_POOL 92
    Un-deterministic Behavior 92
    Overwriting Memory Blocks 93
  Application Timers 93
    Timer Intervals 93
    Timer Accuracy 94
    Timer Execution 94
    Creating Application Timers 94
    Run-time Application Timer Performance Information 95
    Application Timer Control Block TX_TIMER 95
    Excessive Timers 96
  Relative Time 96
  Interrupts 96
    Interrupt Control 97
    ThreadX Managed Interrupts 97
    ISR Template 99
    High-frequency Interrupts 100
    Interrupt Latency 100

4 Description of ThreadX Services 101

5 Device Drivers for ThreadX 295
  Device Driver Introduction 296
  Driver Functions 296
    Driver Initialization 297
    Driver Control 297
    Driver Access 297
    Driver Input 297
    Driver Output 298
    Driver Interrupts 298
    Driver Status 298
    Driver Termination 298
  Simple Driver Example 298
    Simple Driver Initialization 299
    Simple Driver Input 300
    Simple Driver Output 301
    Simple Driver Shortcomings 302
  Advanced Driver Issues 303
    I/O Buffering 303
    Circular Byte Buffers 303
    Circular Buffer Input 303
    Circular Output Buffer 305
    Buffer I/O Management 306
    TX_IO_BUFFER 306
    Buffered I/O Advantage 307
    Buffered Driver Responsibilities 307
    Interrupt Management 309
    Thread Suspension 309

6 Demonstration System for ThreadX 311
  Overview 312
  Application Define 312
    Initial Execution 313
  Thread 0 314
  Thread 1 314
  Thread 2 314
  Threads 3 and 4 315
  Thread 5 315
  Threads 6 and 7 316
  Observing the Demonstration 316
  Distribution file: demo_threadx.c 317

A ThreadX API Services 323
  Entry Function 324
  Block Memory Services 324
  Byte Memory Services 324
  Event Flags Services 325
  Interrupt Control 325
  Mutex Services 325
  Queue Services 326
  Semaphore Services 326
  Thread Control Services 327
  Time Services 328
  Timer Services 328

B ThreadX Constants 329
  Alphabetic Listings 330
  Listing by Value 332

C ThreadX Data Types 335
  TX_BLOCK_POOL 336
  TX_BYTE_POOL 336
  TX_EVENT_FLAGS_GROUP 337
  TX_MUTEX 337
  TX_QUEUE 338
  TX_SEMAPHORE 339
  TX_THREAD 339
  TX_TIMER 341
  TX_TIMER_INTERNAL 341

D ASCII Character Codes 343
  ASCII Character Codes in HEX 344

Index 345

Figures

Figure 1 Template for Application Development 33
Figure 2 Types of Program Execution 45
Figure 3 Memory Area Example 47
Figure 4 Initialization Process 51
Figure 5 Thread State Transition 52
Figure 6 Typical Thread Stack 60
Figure 7 Stack Preset to 0xEFEF 61
Figure 8 Example of Suspended Threads 77
Figure 9 Simple Driver Initialization 300
Figure 10 Simple Driver Input 301
Figure 11 Simple Driver Output 302
Figure 12 Logic for Circular Input Buffer 304
Figure 13 Logic for Circular Output Buffer 305
Figure 14 I/O Buffer 306
Figure 15 Input-Output Lists 308


About This Guide

This guide provides comprehensive information about ThreadX, the high-performance real-time kernel from Express Logic, Inc.

    It is intended for the embedded real-time software

    developer. The developer should be familiar with

    standard real-time operating system functions and

    the C programming language.

    Organization

Chapter 1    Provides a basic overview of ThreadX and its relationship to real-time embedded development.

Chapter 2    Gives the basic steps to install and use ThreadX in your application right out of the box.

Chapter 3    Describes in detail the functional operation of ThreadX, the high-performance real-time kernel.

Chapter 4    Details the application's interface to ThreadX.

Chapter 5    Describes writing I/O drivers for ThreadX applications.

Chapter 6    Describes the demonstration application that is supplied with every ThreadX processor support package.


    Appendix A ThreadX API

    Appendix B ThreadX constants

    Appendix C ThreadX data types

    Appendix D ASCII chart

    Index Topic cross reference

    Guide Conventions

Italics typeface denotes book titles, emphasizes important words, and indicates variables.

Boldface typeface denotes file names, key words, and further emphasizes important words and variables.

Information symbols draw attention to important or additional information that could affect performance or function.

Warning symbols draw attention to situations developers should take care to avoid because they could cause fatal errors.


    ThreadX Data Types

In addition to the custom ThreadX control structure data types, there are a series of special data types that are used in ThreadX service call interfaces. These special data types map directly to data types of the underlying C compiler. This is done to ensure portability between different C compilers. The exact implementation can be found in the tx_port.h file included on the distribution disk.

    The following is a list of ThreadX service call data

    types and their associated meanings:

UINT     Basic unsigned integer. This type must support 8-bit unsigned data; however, it is mapped to the most convenient unsigned data type.

ULONG    Unsigned long type. This type must support 32-bit unsigned data.

VOID     Almost always equivalent to the compiler's void type.

CHAR     Most often a standard 8-bit character type.

    Additional data types are used within the ThreadX

source. They are also located in the tx_port.h file.
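As an illustration only, a port for a typical 32-bit processor might map these service call types roughly as shown below; the real definitions are port-specific and live in the tx_port.h file of your distribution.

/* Illustrative mapping of the ThreadX service call data types for a
   typical 32-bit target; consult tx_port.h for the actual definitions. */
typedef void            VOID;
typedef char            CHAR;
typedef unsigned int    UINT;
typedef unsigned long   ULONG;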


    Customer Support Center

Latest Product Information

    Visit the Express Logic web site and select the

    Support menu option to find the latest online

support information, including information about the latest ThreadX product releases.

What We Need From You

    Please supply us with the following information in an

    email message so we can more efficiently resolve

    your support request:

    1. A detailed description of the problem, including

    frequency of occurrence and whether it can be

    reliably reproduced.

    2. A detailed description of any changes to the

    application and/or ThreadX that preceded the

    problem.

3. The contents of the _tx_version_id string found in the tx_port.h file of your distribution. This string will provide us valuable information regarding your run-time environment.

4. The contents in RAM of the _tx_build_options ULONG variable. This variable will give us information on how your ThreadX library was built.

    Support engineers 858.613.6640

    Support fax 858.521.4259

    Support email [email protected]

    Web page http://www.expresslogic.com


Where to Send Comments About This Guide

The staff at Express Logic is always striving to provide you with better products. To help us achieve this goal, email any comments and suggestions to the Customer Support Center at

    [email protected]

    Enter ThreadX User Guide in the subject line.


    C H A P T E R 1

    Introduction to ThreadX

ThreadX is a high-performance real-time kernel designed specifically for embedded applications. This chapter contains an introduction to the product and a description of its applications and benefits.

ThreadX Unique Features 20
  picokernel Architecture 20
  ANSI C Source Code 20
  Advanced Technology 20
  Not A Black Box 21
  The RTOS Standard 22
Embedded Applications 22
  Real-time Software 22
  Multitasking 22
  Tasks vs. Threads 23
ThreadX Benefits 23
  Improved Responsiveness 24
  Software Maintenance 24
  Increased Throughput 24
  Processor Isolation 25
  Dividing the Application 25
  Ease of Use 26
  Improve Time-to-market 26
  Protecting the Software Investment 26


    ThreadX Unique Features

Unlike other real-time kernels, ThreadX is designed to be versatile, easily scaling from small microcontroller-based applications to those that use powerful CISC, RISC, and DSP processors.

ThreadX is scalable based on its underlying architecture. Because ThreadX services are implemented as a C library, only those services actually used by the application are brought into the run-time image. Hence, the actual size of ThreadX is completely determined by the application. For most applications, the instruction image of ThreadX ranges between 2 KBytes and 15 KBytes in size.

picokernel Architecture

Instead of layering kernel functions on top of each other like traditional microkernel architectures, ThreadX services plug directly into its core. This results in the fastest possible context switching and service call performance. We call this non-layering design a picokernel architecture.

ANSI C Source Code

ThreadX is written primarily in ANSI C. A small amount of assembly language is needed to tailor the kernel to the underlying target processor. This design makes it possible to port ThreadX to a new processor family in a very short time, usually within weeks!

Advanced Technology

The following are highlights of the ThreadX advanced technology:

Simple picokernel architecture

    Automatic scaling (small footprint)

    Deterministic processing

    Fast real-time performance


    Preemptive and cooperative scheduling

    Flexible thread priority support (32-1024)

    Dynamic system object creation

Unlimited number of system objects

Optimized interrupt handling

    Preemption-threshold

    Priority inheritance

    Event-chaining

    Fast software timers

    Run-time memory management

    Run-time performance monitoring

    Run-time stack analysis

    Built-in system trace

    Vast processor support

    Vast development tool support

    Completely endian neutral

Not A Black Box

Most distributions of ThreadX include the complete C source code as well as the processor-specific assembly language. This eliminates the black-box problems that occur with many commercial kernels. With ThreadX, application developers can see exactly what the kernel is doing; there are no mysteries!

The source code also allows for application-specific modifications. Although not recommended, it is certainly beneficial to have the ability to modify the kernel if it is absolutely required.

These features are especially comforting to developers accustomed to working with their own in-house kernels. They expect to have source code and the ability to modify the kernel. ThreadX is the ultimate kernel for such developers.


The RTOS Standard

Because of its versatility, high-performance picokernel architecture, advanced technology, and demonstrated portability, ThreadX is deployed in more than 300,000,000 devices today. This effectively makes ThreadX the RTOS standard for deeply embedded applications.

    Embedded Applications

Embedded applications execute on microprocessors buried within products such as wireless communication devices, automobile engines, laser printers, medical devices, etc. Another distinction of embedded applications is that their software and hardware have a dedicated purpose.

Real-time Software

When time constraints are imposed on the application software, it is called real-time software. Basically, software that must perform its processing within an exact period of time is called real-time software. Embedded applications are almost always real-time because of their inherent interaction with external events.

Multitasking

As mentioned, embedded applications have a dedicated purpose. To fulfill this purpose, the software must perform a variety of tasks. A task is a semi-independent portion of the application that carries out a specific duty. It is also the case that some tasks are more important than others. One of the major difficulties in an embedded application is the allocation of the processor between the various application tasks. This allocation of processing between competing tasks is the primary purpose of ThreadX.


Tasks vs. Threads

Another distinction about tasks must be made. The term task is used in a variety of ways. It sometimes means a separately loadable program. In other instances, it may refer to an internal program segment.

In contemporary operating system discussion, there are two terms that more or less replace the use of task: process and thread. A process is a completely independent program that has its own address space, while a thread is a semi-independent program segment that executes within a process. Threads share the same process address space. The overhead associated with thread management is minimal.

Most embedded applications cannot afford the overhead (both memory and performance) associated with a full-blown process-oriented operating system. In addition, smaller microprocessors don't have the hardware architecture to support a true process-oriented operating system. For these reasons, ThreadX implements a thread model, which is both extremely efficient and practical for most real-time embedded applications.

To avoid confusion, ThreadX does not use the term task. Instead, the more descriptive and contemporary name thread is used.

    ThreadX Benefits

Using ThreadX provides many benefits to embedded applications. Of course, the primary benefit rests in how embedded application threads are allocated processing time.


Improved Responsiveness

Prior to real-time kernels like ThreadX, most embedded applications allocated processing time with a simple control loop, usually from within the C main function. This approach is still used in very small or simple applications. However, in large or complex applications, it is not practical because the response time to any event is a function of the worst-case processing time of one pass through the control loop.

Making matters worse, the timing characteristics of the application change whenever modifications are made to the control loop. This makes the application inherently unstable and difficult to maintain and improve on.

ThreadX provides fast and deterministic response times to important external events. ThreadX accomplishes this through its preemptive, priority-based scheduling algorithm, which allows a higher-priority thread to preempt an executing lower-priority thread. As a result, the worst-case response time approaches the time required to perform a context switch. This is not only deterministic, but it is also extremely fast.

Software Maintenance

The ThreadX kernel enables application developers to concentrate on specific requirements of their application threads without having to worry about changing the timing of other areas of the application. This feature also makes it much easier to repair or enhance an application that utilizes ThreadX.

Increased Throughput

A possible work-around to the control loop response time problem is to add more polling. This improves the responsiveness, but it still doesn't guarantee a constant worst-case response time and does nothing to enhance future modification of the application. Also, the processor is now performing even more


unnecessary processing because of the extra polling. All of this unnecessary processing reduces the overall throughput of the system.

An interesting point regarding overhead is that many developers assume that multithreaded environments like ThreadX increase overhead and have a negative impact on total system throughput. But in some cases, multithreading actually reduces overhead by eliminating all of the redundant polling that occurs in control loop environments. The overhead associated with multithreaded kernels is typically a function of the time required for context switching. If the context switch time is less than the time spent polling, ThreadX provides a solution with the potential of less overhead and more throughput. This makes ThreadX an obvious choice for applications that have any degree of complexity or size.

Processor Isolation

ThreadX provides a robust processor-independent interface between the application and the underlying processor. This allows developers to concentrate on the application rather than spending a significant amount of time learning hardware details.

Dividing the Application

In control loop-based applications, each developer must have an intimate knowledge of the entire application's run-time behavior and requirements. This is because the processor allocation logic is dispersed throughout the entire application. As an application increases in size or complexity, it becomes impossible for all developers to remember the precise processing requirements of the entire application.

ThreadX frees each developer from the worries associated with processor allocation and allows them to concentrate on their specific piece of the embedded application. In addition, ThreadX forces


the application to be divided into clearly defined threads. By itself, this division of the application into threads makes development much simpler.

Ease of Use

ThreadX is designed with the application developer in mind. The ThreadX architecture and service call interface are designed to be easily understood. As a result, ThreadX developers can quickly use its advanced features.

Improve Time-to-market

All of the benefits of ThreadX accelerate the software development process. ThreadX takes care of most processor issues, thereby removing this effort from the development schedule. All of this results in a faster time to market!

Protecting the Software Investment

Because of its architecture, ThreadX is easily ported to new processor and/or development tool environments. This, coupled with the fact that ThreadX insulates applications from details of the underlying processors, makes ThreadX applications highly portable. As a result, the application's migration path is guaranteed, and the original development investment is protected.


    C H A P T E R 2

    Installation and Use of ThreadX

This chapter contains a description of various issues related to installation, setup, and usage of the high-performance ThreadX kernel.

Host Considerations 28
Target Considerations 28
Product Distribution 29
ThreadX Installation 30
Using ThreadX 31
Small Example System 32
Troubleshooting 34
Configuration Options 34
ThreadX Version ID 40


    Host Considerations

Embedded software is usually developed on Windows or Linux (Unix) host computers. After the application is compiled, linked, and located on the host, it is downloaded to the target hardware for execution.

Usually the target download is done from within the development tool debugger. After download, the debugger is responsible for providing target execution control (go, halt, breakpoint, etc.) as well as access to memory and processor registers.

Most development tool debuggers communicate with the target hardware via on-chip debug (OCD) connections such as JTAG (IEEE 1149.1) and Background Debug Mode (BDM). Debuggers also communicate with target hardware through In-Circuit Emulation (ICE) connections. Both OCD and ICE connections provide robust solutions with minimal intrusion on the target-resident software.

As for resources used on the host, the source code for ThreadX is delivered in ASCII format and requires approximately 1 MByte of space on the host computer's hard disk.

    Please review the supplied readme_threadx.txt file

    for additional host system considerations and

    options.

    Target Considerations

ThreadX requires between 2 KBytes and 20 KBytes of Read Only Memory (ROM) on the target. Another 1 to 2 KBytes of the target's Random Access Memory (RAM) are required for the ThreadX system stack and other global data structures.



For timer-related functions like service call time-outs, time-slicing, and application timers to function, the underlying target hardware must provide a periodic interrupt source. If the processor has this capability, it is utilized by ThreadX. Otherwise, if the target processor does not have the ability to generate a periodic interrupt, the user's hardware must provide it. Setup and configuration of the timer interrupt is typically located in the tx_initialize_low_level assembly file in the ThreadX distribution.

    ThreadX is still functional even if no periodic timer

    interrupt source is available. However, none of the

    timer-related services are functional. Please review

    the supplied readme_threadx.txt file for any

    additional host system considerations and/or options.

    Product Distribution

ThreadX is shipped on a single CD-ROM. Two types of ThreadX packages are available: standard and premium. The standard package includes minimal source code, while the premium package contains complete ThreadX source code.

The exact content of the distribution disk depends on the target processor, development tools, and the ThreadX package purchased. However, the following is a list of several important files that are common to most product distributions:

    readme_threadx.txt

Text file containing specific information about the ThreadX port, including information about the target processor and the development tools.



tx_api.h
C header file containing all system equates, data structures, and service prototypes.

tx_port.h
C header file containing all development-tool and target-specific data definitions and structures.

demo_threadx.c
C file containing a small demo application.

tx.a (or tx.lib)
Binary version of the ThreadX C library that is distributed with the standard package.

    All file names are in lower-case. This naming

    convention makes it easier to convert the commands

    to Linux (Unix) development platforms.

    ThreadX Installation

Installation of ThreadX is straightforward. The following instructions apply to virtually any installation. However, examine the readme_threadx.txt file for changes specific to the actual development tool environment.

Step 1: Back up the ThreadX distribution disk and store it in a safe location.

Step 2: On the host hard drive, make a directory called threadx or something similar. The ThreadX kernel files will reside in this directory.

Step 3: Copy all files from the ThreadX distribution CD-ROM into the directory created in step 2.

Step 4: If the standard package was purchased, installation of ThreadX is now complete.


    Application software needs access to the ThreadX

library file (usually tx.a or tx.lib) and the C include files tx_api.h and tx_port.h. This is accomplished either by setting the appropriate path for the development tools or by copying these files into the application development area.

    Using ThreadX

Using ThreadX is easy. Basically, the application code must include tx_api.h during compilation and link with the ThreadX run-time library tx.a (or tx.lib).

There are four steps required to build a ThreadX application:

Step 1: Include the tx_api.h file in all application files that use ThreadX services or data structures.

Step 2: Create the standard C main function. This function must eventually call tx_kernel_enter to start ThreadX. Application-specific initialization that does not involve ThreadX may be added prior to entering the kernel.

The ThreadX entry function tx_kernel_enter does not return, so be sure not to place any processing or function calls after it.

Step 3: Create the tx_application_define function. This is where the initial system resources are created. Examples of system resources include threads, queues, memory pools, event flags groups, mutexes, and semaphores.

Step 4: Compile application source and link with the ThreadX run-time library tx.lib. The resulting image can be downloaded to the target and executed!


    Small Example System

The small example system in Figure 1 on page 33 shows the creation of a single thread with a priority of 3. The thread executes, increments a counter, then sleeps for one clock tick. This process continues forever.


    FIGURE 1. Template for Application Development

Although this is a simple example, it provides a good template for real application development. Once again, please see the readme_threadx.txt file for additional details.

    #include "tx_api.h"

    unsigned long my_thread_counter = 0;

    TX_THREAD my_thread;

    main( )

    {

    /* Enter the ThreadX kernel. */

    tx_kernel_enter( );

    }

    void tx_application_define(void *first_unused_memory)

    {

    /* Create my_thread! */

    tx_thread_create(&my_thread, "My Thread",

    my_thread_entry, 0x1234, first_unused_memory, 1024,

    3, 3, TX_NO_TIME_SLICE, TX_AUTO_START);

    }

    void my_thread_entry(ULONG thread_input)

    {

    /* Enter into a forever loop. */

    while(1)

    {

    /* Increment thread counter. */

    my_thread_counter++;

    /* Sleep for 1 tick. */

    tx_thread_sleep(1);

    }

    }


    Troubleshooting

Each ThreadX port is delivered with a demonstration application. It is always a good idea to first get the demonstration system running, either on actual target hardware or in a simulated environment.

See the readme_threadx.txt file supplied with the distribution for more specific details regarding the demonstration system.

If the demonstration system does not execute properly, the following are some troubleshooting tips:

    1. Determine how much of the demonstration is

    running.

2. Increase stack sizes (this is more important in actual application code than it is for the demonstration).

3. Rebuild the ThreadX library with TX_ENABLE_STACK_CHECKING defined. This will enable the built-in ThreadX stack checking.

4. Temporarily bypass any recent changes to see if the problem disappears or changes. Such information should prove useful to Express Logic support engineers.

Follow the procedures outlined in What We Need From You on page 16 to send the information gathered from the troubleshooting steps.

    Configuration Options

There are several configuration options when building the ThreadX library and the application using ThreadX. The options below can be defined in the application source, on the command line, or within the tx_user.h include file.


    Options defined in tx_user.h are applied only if the

    application and ThreadX library are built with

TX_INCLUDE_USER_DEFINE_FILE defined.
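For example, a project that collects its options in tx_user.h might contain a sketch like the one below; the option names appear in the table that follows, but the particular values chosen here are illustrative only.

/* tx_user.h -- example application configuration (illustrative values).
   Honored only when both the ThreadX library and the application are
   compiled with TX_INCLUDE_USER_DEFINE_FILE defined.                   */
#ifndef TX_USER_H
#define TX_USER_H

#define TX_MAX_PRIORITIES            32    /* 32 through 1024, divisible by 32   */
#define TX_TIMER_THREAD_STACK_SIZE   1024  /* bytes for the internal timer thread */
#define TX_ENABLE_STACK_CHECKING           /* enable run-time stack checking      */

#endif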

Review the readme_threadx.txt file for additional options for your specific version of ThreadX. The following describes each configuration option in detail:


TX_DISABLE_ERROR_CHECKING
Bypasses basic service call error checking. When defined in the application source, all basic parameter error checking is disabled. This may improve performance by as much as 30% and may also reduce the image size. Of course, this option should only be used after the application is thoroughly debugged. By default, this option is not defined.

ThreadX API return values not affected by disabling error checking are listed in bold in the Return Values section of each API description in Chapter 4. The non-bold return values are void if error checking is disabled by using the TX_DISABLE_ERROR_CHECKING option.

TX_MAX_PRIORITIES
Defines the priority levels for ThreadX. Legal values range from 32 through 1024 (inclusive) and must be evenly divisible by 32. Increasing the number of priority levels supported increases the RAM usage by 128 bytes for every group of 32 priorities. However, there is only a negligible effect on performance. By default, this value is set to 32 priority levels.


TX_MINIMUM_STACK
Defines the minimum stack size (in bytes). It is used for error checking when threads are created. The default value is port-specific and is found in tx_port.h.

TX_TIMER_THREAD_STACK_SIZE
Defines the stack size (in bytes) of the internal ThreadX system timer thread. This thread processes all thread sleep requests as well as all service call timeouts. In addition, all application timer callback routines are invoked from this context. The default value is port-specific and is found in tx_port.h.

TX_TIMER_THREAD_PRIORITY
Defines the priority of the internal ThreadX system timer thread. The default value is priority 0, the highest priority in ThreadX. The default value is defined in tx_port.h.

TX_TIMER_PROCESS_IN_ISR
When defined, eliminates the internal system timer thread for ThreadX. This results in improved performance on timer events and smaller RAM requirements because the timer stack and control block are no longer needed. However, using this option moves all the timer expiration processing to the timer ISR level. By default, this option is not defined.

TX_REACTIVATE_INLINE
When defined, performs reactivation of ThreadX timers in-line instead of using a function call. This improves performance but slightly increases code size. By default, this option is not defined.


TX_DISABLE_STACK_FILLING
When defined, disables placing the 0xEF value in each byte of each thread's stack when created. By default, this option is not defined.

TX_ENABLE_STACK_CHECKING
When defined, enables ThreadX run-time stack checking, which includes analysis of how much stack has been used and examination of data pattern fences before and after the stack area. If a stack error is detected, the registered application stack error handler is called. This option does result in slightly increased overhead and code size. Review the tx_thread_stack_error_notify API for more information. By default, this option is not defined.

TX_DISABLE_PREEMPTION_THRESHOLD
When defined, disables the preemption-threshold feature and slightly reduces code size and improves performance. Of course, the preemption-threshold capabilities are no longer available. By default, this option is not defined.

TX_DISABLE_REDUNDANT_CLEARING
When defined, removes the logic for initializing ThreadX global C data structures to zero. This should only be used if the compiler's initialization code sets all un-initialized C global data to zero. Using this option slightly reduces code size and improves performance during initialization. By default, this option is not defined.


TX_DISABLE_NOTIFY_CALLBACKS
When defined, disables the notify callbacks for various ThreadX objects. Using this option slightly reduces code size and improves performance. By default, this option is not defined.

TX_BLOCK_POOL_ENABLE_PERFORMANCE_INFO
When defined, enables the gathering of performance information on block pools. By default, this option is not defined.

TX_BYTE_POOL_ENABLE_PERFORMANCE_INFO
When defined, enables the gathering of performance information on byte pools. By default, this option is not defined.

TX_EVENT_FLAGS_ENABLE_PERFORMANCE_INFO
When defined, enables the gathering of performance information on event flags groups. By default, this option is not defined.

TX_MUTEX_ENABLE_PERFORMANCE_INFO
When defined, enables the gathering of performance information on mutexes. By default, this option is not defined.

TX_QUEUE_ENABLE_PERFORMANCE_INFO
When defined, enables the gathering of performance information on queues. By default, this option is not defined.

TX_SEMAPHORE_ENABLE_PERFORMANCE_INFO
When defined, enables the gathering of performance information on semaphores. By default, this option is not defined.

TX_THREAD_ENABLE_PERFORMANCE_INFO
When defined, enables the gathering of performance information on threads. By default, this option is not defined.

TX_TIMER_ENABLE_PERFORMANCE_INFO
When defined, enables the gathering of performance information on timers. By default, this option is not defined.


    ThreadX Version ID

The ThreadX version ID can be found in the readme_threadx.txt file. This file also contains a version history of the corresponding port. Application software can obtain the ThreadX version by examining the global string _tx_version_id.
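For instance, a small routine along the following lines could report the version string, together with the _tx_build_options word mentioned in the support section, when preparing a support request; the extern declarations shown are illustrative, and printf stands in for whatever output mechanism the application provides.

#include <stdio.h>
#include "tx_api.h"

/* Supplied by the ThreadX library; the declarations shown are illustrative. */
extern char  _tx_version_id[];
extern ULONG _tx_build_options;

void report_threadx_version(void)
{
    /* Print the version string and the library build options. */
    printf("%s\n", _tx_version_id);
    printf("build options: 0x%08lx\n", (unsigned long)_tx_build_options);
}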


C H A P T E R 3

Functional Components of ThreadX

This chapter contains a description of the high-performance ThreadX kernel from a functional perspective. Each functional component is presented in an easy-to-understand manner.

Execution Overview 44
  Initialization 44
  Thread Execution 44
  Interrupt Service Routines (ISR) 44
  Application Timers 46
Memory Usage 46
  Static Memory Usage 46
  Dynamic Memory Usage 48
Initialization 48
  System Reset Vector 48
  Development Tool Initialization 49
  main Function 49
  tx_kernel_enter 49
  Application Definition Function 50
  Interrupts 50
Thread Execution 50
  Thread Execution States 52
  Thread Entry/Exit Notification 54
  Thread Priorities 54
  Thread Scheduling 55
  Round-robin Scheduling 55
  Time-Slicing 55
  Preemption 56
  Preemption-Threshold 56
  Priority Inheritance 57
  Thread Creation 57
  Thread Control Block TX_THREAD 57
  Currently Executing Thread 59
  Thread Stack Area 59
  Memory Pitfalls 62
  Optional Run-time Stack Checking 62
  Reentrancy 62
  Thread Priority Pitfalls 63
  Priority Overhead 64
  Run-time Thread Performance Information 65
  Debugging Pitfalls 67
Message Queues 67
  Creating Message Queues 68
  Message Size 68
  Message Queue Capacity 68
  Queue Memory Area 69
  Thread Suspension 69
  Queue Send Notification 70
  Queue Event-chaining 70
  Run-time Queue Performance Information 71
  Queue Control Block TX_QUEUE 72
  Message Destination Pitfall 72
Counting Semaphores 72
  Mutual Exclusion 73
  Event Notification 73
  Creating Counting Semaphores 74
  Thread Suspension 74
  Semaphore Put Notification 74
  Semaphore Event-chaining 75
  Run-time Semaphore Performance Information 75
  Semaphore Control Block TX_SEMAPHORE 76
  Deadly Embrace 76
  Priority Inversion 78
Mutexes 78
  Mutex Mutual Exclusion 79
  Creating Mutexes 79
  Thread Suspension 79
  Run-time Mutex Performance Information 80
  Mutex Control Block TX_MUTEX 81
  Deadly Embrace 81
  Priority Inversion 81
Event Flags 82
  Creating Event Flags Groups 83
  Thread Suspension 83
  Event Flags Set Notification 83
  Event Flags Event-chaining 84
  Run-time Event Flags Performance Information 84
  Event Flags Group Control Block TX_EVENT_FLAGS_GROUP 85
Memory Block Pools 85
  Creating Memory Block Pools 86
  Memory Block Size 86
  Pool Capacity 86
  Pool's Memory Area 87
  Thread Suspension 87
  Run-time Block Pool Performance Information 87
  Memory Block Pool Control Block TX_BLOCK_POOL 88
  Overwriting Memory Blocks 89
Memory Byte Pools 89
  Creating Memory Byte Pools 89
  Pool Capacity 90
  Pool's Memory Area 90
  Thread Suspension 90
  Run-time Byte Pool Performance Information 91
  Memory Byte Pool Control Block TX_BYTE_POOL 92
  Un-deterministic Behavior 92
  Overwriting Memory Blocks 93
Application Timers 93
  Timer Intervals 93
  Timer Accuracy 94
  Timer Execution 94
  Creating Application Timers 94
  Run-time Application Timer Performance Information 95
  Application Timer Control Block TX_TIMER 95
  Excessive Timers 96
Relative Time 96
Interrupts 97
  Interrupt Control 97
  ThreadX Managed Interrupts 97
  ISR Template 99
  High-frequency Interrupts 100
  Interrupt Latency 100


    Execution Overview

There are four types of program execution within a ThreadX application: Initialization, Thread Execution, Interrupt Service Routines (ISRs), and Application Timers.

Figure 2 on page 45 shows each different type of program execution. More detailed information about each of these types is found in subsequent sections of this chapter.

Initialization

As the name implies, this is the first type of program execution in a ThreadX application. Initialization includes all program execution between processor reset and the entry point of the thread scheduling loop.

Thread Execution

After initialization is complete, ThreadX enters its thread scheduling loop. The scheduling loop looks for an application thread ready for execution. When a ready thread is found, ThreadX transfers control to it. After the thread is finished (or another higher-priority thread becomes ready), execution transfers back to the thread scheduling loop to find the next highest priority ready thread.

This process of continually executing and scheduling threads is the most common type of program execution in ThreadX applications.

Interrupt Service Routines (ISR)

Interrupts are the cornerstone of real-time systems. Without interrupts it would be extremely difficult to respond to changes in the external world in a timely manner. On detection of an interrupt, the processor saves key information about the current program execution (usually on the stack), then transfers


control to a predefined program area. This predefined program area is commonly called an Interrupt Service Routine.

In most cases, interrupts occur during thread execution (or in the thread scheduling loop). However, interrupts may also occur inside of an executing ISR or an Application Timer.

FIGURE 2. Types of Program Execution
(Figure shows the four types of program execution: Initialization after hardware reset, Thread Execution, Interrupt Service Routines, and Application Timers.)


Application Timers

Application Timers are similar to ISRs, except the hardware implementation (usually a single periodic hardware interrupt is used) is hidden from the application. Such timers are used by applications to perform time-outs, periodics, and/or watchdog services. Just like ISRs, Application Timers most often interrupt thread execution. Unlike ISRs, however, Application Timers cannot interrupt each other.

    Memory Usage

ThreadX resides along with the application program. As a result, the static memory (or fixed memory) usage of ThreadX is determined by the development tools; e.g., the compiler, linker, and locator. Dynamic memory (or run-time memory) usage is under direct control of the application.

Static Memory Usage

Most of the development tools divide the application program image into five basic areas: instruction, constant, initialized data, uninitialized data, and system stack. Figure 3 on page 47 shows an example of these memory areas.

It is important to understand that this is only an example. The actual static memory layout is specific to the processor, development tools, and the underlying hardware.

The instruction area contains all of the program's processor instructions. This area is typically the largest and is often located in ROM.

The constant area contains various compiled constants, including strings defined or referenced within the program. In addition, this area contains the initial copy of the initialized data area. During the


compiler's initialization process, this portion of the constant area is used to set up the initialized data area in RAM. The constant area usually follows the instruction area and is often located in ROM.

The initialized data and uninitialized data areas contain all of the global and static variables. These areas are always located in RAM.

The system stack is generally set up immediately following the initialized and uninitialized data areas.

FIGURE 3. Memory Area Example
(Figure shows an example static memory layout: the Instruction area and Constant area in ROM beginning at address 0x00000000, followed by the Initialized Data area, Uninitialized Data area, and System Stack area in RAM beginning at address 0x80000000; shading indicates ThreadX usage.)


The system stack is used by the compiler during initialization, then by ThreadX during initialization and, subsequently, in ISR processing.

Dynamic Memory Usage

As mentioned before, dynamic memory usage is under direct control of the application. Control blocks and memory areas associated with stacks, queues, and memory pools can be placed anywhere in the target's memory space. This is an important feature because it facilitates easy utilization of different types of physical memory.

For example, suppose a target hardware environment has both fast memory and slow memory. If the application needs extra performance for a high-priority thread, its control block (TX_THREAD) and stack can be placed in the fast memory area, which may greatly enhance its performance.
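A minimal sketch of that idea, assuming a GCC-style section attribute and a ".fast_ram" region defined in the application's linker script (both assumptions, not part of ThreadX itself), might look like this:

#include "tx_api.h"

/* Place the control block and stack of a performance-critical thread in a
   fast RAM region; the section name and attribute syntax are tool-specific. */
static TX_THREAD fast_thread __attribute__((section(".fast_ram")));
static ULONG     fast_stack[1024 / sizeof(ULONG)] __attribute__((section(".fast_ram")));

void create_fast_thread(void (*entry_function)(ULONG))
{
    /* Priority 1 thread using the fast-memory control block and stack. */
    tx_thread_create(&fast_thread, "fast thread", entry_function, 0,
                     fast_stack, sizeof(fast_stack),
                     1, 1, TX_NO_TIME_SLICE, TX_AUTO_START);
}

Declaring the stack as an array of ULONG also keeps it naturally aligned for the processor.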

    Initialization

Understanding the initialization process is important. The initial hardware environment is set up here. In addition, this is where the application is given its initial personality.

ThreadX attempts to utilize (whenever possible) the complete development tool's initialization process. This makes it easier to upgrade to new versions of the development tools in the future.

System Reset Vector

All microprocessors have reset logic. When a reset occurs (either hardware or software), the address of the application's entry point is retrieved from a


specific memory location. After the entry point is retrieved, the processor transfers control to that location.

The application entry point is quite often written in the native assembly language and is usually supplied by the development tools (at least in template form). In some cases, a special version of the entry program is supplied with ThreadX.

Development Tool Initialization

After the low-level initialization is complete, control transfers to the development tool's high-level initialization. This is usually the place where initialized global and static C variables are set up. Remember their initial values are retrieved from the constant area. Exact initialization processing is development-tool specific.

main Function

When the development tool initialization is complete, control transfers to the user-supplied main function. At this point, the application controls what happens next. For most applications, the main function simply calls tx_kernel_enter, which is the entry into ThreadX. However, applications can perform preliminary processing (usually for hardware initialization) prior to entering ThreadX.

    The call to tx_kernel_enter does not return, so do not

    place any processing after it!

tx_kernel_enter

The entry function coordinates initialization of various internal ThreadX data structures and then calls the application's definition function, tx_application_define.

When tx_application_define returns, control is transferred to the thread scheduling loop. This marks the end of initialization!


Application Definition Function

The tx_application_define function defines all of the initial application threads, queues, semaphores, mutexes, event flags, memory pools, and timers. It is also possible to create and delete system resources from threads during the normal operation of the application. However, all initial application resources are defined here.

The tx_application_define function has a single input parameter, and it is certainly worth mentioning. The first-available RAM address is the sole input parameter to this function. It is typically used as a starting point for initial run-time memory allocations of thread stacks, queues, and memory pools.
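One common pattern, sketched below under the assumption that a byte pool is used for later allocations (the names app_pool, worker_thread, and worker_entry are illustrative), is to turn that first-available RAM address into a byte pool inside tx_application_define and carve thread stacks out of it:

#include "tx_api.h"

#define APP_POOL_SIZE  8192            /* illustrative pool size in bytes */

static TX_BYTE_POOL app_pool;
static TX_THREAD    worker_thread;

void worker_entry(ULONG input);        /* thread entry, defined elsewhere */

void tx_application_define(void *first_unused_memory)
{
    CHAR *stack_ptr;

    /* Use the first-available RAM address as the start of a byte pool. */
    tx_byte_pool_create(&app_pool, "app pool",
                        first_unused_memory, APP_POOL_SIZE);

    /* Allocate a 1024-byte thread stack from the pool...                */
    tx_byte_allocate(&app_pool, (VOID **)&stack_ptr, 1024, TX_NO_WAIT);

    /* ...and create an initial application thread that uses it.         */
    tx_thread_create(&worker_thread, "worker", worker_entry, 0,
                     stack_ptr, 1024, 16, 16,
                     TX_NO_TIME_SLICE, TX_AUTO_START);
}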

    After initialization is complete, only an executing

    thread can create and delete system resources

    including other threads. Therefore, at least one

    thread must be created during initialization.

Interrupts

Interrupts are left disabled during the entire initialization process. If the application somehow enables interrupts, unpredictable behavior may occur. Figure 4 on page 51 shows the entire initialization process, from system reset through application-specific initialization.

    Thread Execution

Scheduling and executing application threads is the most important activity of ThreadX. A thread is typically defined as a semi-independent program segment with a dedicated purpose. The combined processing of all threads makes an application.

Threads are created dynamically by calling tx_thread_create during initialization or during thread execution. Threads are created in either a ready or suspended state.


FIGURE 4. Initialization Process
(Figure shows the initialization sequence: System Reset Vector, entry point*, development tool initialization*, main( ), tx_kernel_enter( ), tx_application_define(mem_ptr), and finally entry into the thread scheduling loop. * denotes functions that are development-tool specific.)


Thread Execution States

Understanding the different processing states of threads is a key ingredient to understanding the entire multithreaded environment. In ThreadX there are five distinct thread states: ready, suspended, executing, terminated, and completed. Figure 5 shows the thread state transition diagram for ThreadX.

FIGURE 5. Thread State Transition
(Figure shows the state transition diagram: tx_thread_create places a thread in the Ready state with TX_AUTO_START or in the Suspended state with TX_DONT_START; thread scheduling moves a Ready thread to the Executing state; services with suspension or a self-suspend move an Executing thread to the Suspended state; returning from the thread entry function moves it to the Completed state; the terminate service or a self-terminate moves it to the Terminated state.)


A thread is in a ready state when it is ready for execution. A ready thread is not executed until it is the highest priority thread in ready state. When this happens, ThreadX executes the thread, which then changes its state to executing.

If a higher-priority thread becomes ready, the executing thread reverts back to a ready state. The newly ready high-priority thread is then executed, which changes its logical state to executing. This transition between ready and executing states occurs every time thread preemption occurs.

At any given moment, only one thread is in an executing state. This is because a thread in the executing state has control of the underlying processor.

Threads in a suspended state are not eligible for execution. Reasons for being in a suspended state include suspension for time, queue messages, semaphores, mutexes, event flags, memory, and basic thread suspension. After the cause for suspension is removed, the thread is placed back in a ready state.

A thread in a completed state is a thread that has completed its processing and returned from its entry function. The entry function is specified during thread creation. A thread in a completed state cannot execute again.

A thread is in a terminated state because another thread or the thread itself called the tx_thread_terminate service. A thread in a terminated state cannot execute again.

If re-starting a completed or terminated thread is desired, the application must first delete the thread. It can then be re-created and re-started.


    Thread Entry/Exit Notification

    Some applications may find it advantageous to be notified when a specific thread is entered for the first time, when it completes, or is terminated. ThreadX provides this ability through the tx_thread_entry_exit_notify service. This service registers an application notification function for a specific thread, which is called by ThreadX whenever the thread starts running, completes, or is terminated. After being invoked, the application notification function can perform the application-specific processing. This typically involves informing another application thread of the event via a ThreadX synchronization primitive.
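
    A sketch of registering an entry/exit notification follows; the thread and the contents of the notification function are hypothetical application choices:

    #include "tx_api.h"

    extern TX_THREAD  monitored_thread;    /* assumed to be created elsewhere */

    /* Called by ThreadX when the thread first starts (TX_THREAD_ENTRY) or when
       it completes or is terminated (TX_THREAD_EXIT). */
    static VOID monitored_thread_notify(TX_THREAD *thread_ptr, UINT condition)
    {
        (void) thread_ptr;
        if (condition == TX_THREAD_ENTRY)
        {
            /* e.g., inform a supervisor thread that the thread has started */
        }
        else if (condition == TX_THREAD_EXIT)
        {
            /* e.g., record that the thread completed or was terminated */
        }
    }

    VOID register_thread_notify(VOID)
    {
        tx_thread_entry_exit_notify(&monitored_thread, monitored_thread_notify);
    }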

    Thread Priorities

    As mentioned before, a thread is a semi-independent program segment with a dedicated purpose. However, all threads are not created equal! The dedicated purpose of some threads is much more important than others. This heterogeneous type of thread importance is a hallmark of embedded real-time applications.

    ThreadX determines a thread's importance when the thread is created by assigning a numerical value representing its priority. The maximum number of ThreadX priorities is configurable from 32 through 1024 in increments of 32. The actual maximum number of priorities is determined by the TX_MAX_PRIORITIES constant during compilation of the ThreadX library. Having a larger number of priorities does not significantly increase processing overhead. However, for each group of 32 priority levels an additional 128 bytes of RAM is required to manage them. For example, 32 priority levels require 128 bytes of RAM, 64 priority levels require 256 bytes of RAM, and 96 priority levels require 384 bytes of RAM.

    By default, ThreadX has 32 priority levels, ranging from priority 0 through priority 31. Numerically smaller values imply higher priority. Hence, priority 0 represents the highest priority, while priority (TX_MAX_PRIORITIES-1) represents the lowest priority.

    Multiple threads can have the same priority, relying on cooperative scheduling or time-slicing. In addition, thread priorities can be changed during run-time.
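
    For example, a priority can be changed at run-time with tx_thread_priority_change; the thread and the new priority value below are illustrative:

    #include "tx_api.h"

    extern TX_THREAD worker_thread;    /* assumed application thread */

    VOID boost_worker_priority(VOID)
    {
        UINT old_priority;

        /* Move worker_thread to priority 10 (numerically lower means more important);
           its previous priority is returned in old_priority for later restoration. */
        tx_thread_priority_change(&worker_thread, 10, &old_priority);
    }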

    Thread Scheduling

    ThreadX schedules threads based on their priority. The ready thread with the highest priority is executed first. If multiple threads of the same priority are ready, they are executed in a first-in-first-out (FIFO) manner.

    Round-robin Scheduling

    ThreadX supports round-robin scheduling of multiple threads having the same priority. This is accomplished through cooperative calls to tx_thread_relinquish. This service gives all other ready threads of the same priority a chance to execute before the tx_thread_relinquish caller executes again.
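
    A sketch of cooperative round-robin scheduling follows; the entry function and the work function it calls are hypothetical:

    #include "tx_api.h"

    VOID do_one_unit_of_work(ULONG id);    /* hypothetical application function */

    /* Entry function shared by several threads created at the same priority. */
    static VOID worker_entry(ULONG id)
    {
        while (1)
        {
            do_one_unit_of_work(id);

            /* Let all other ready threads at this priority run before continuing. */
            tx_thread_relinquish();
        }
    }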

    Time-Slicing

    Time-slicing is another form of round-robin scheduling. A time-slice specifies the maximum number of timer ticks (timer interrupts) that a thread can execute without giving up the processor. In ThreadX, time-slicing is available on a per-thread basis. The thread's time-slice is assigned during creation and can be modified during run-time. When a time-slice expires, all other ready threads of the same priority level are given a chance to execute before the time-sliced thread executes again.

    A fresh thread time-slice is given to a thread after it suspends, relinquishes, makes a ThreadX service call that causes preemption, or is itself time-sliced.


    When a time-sliced thread is preempted, it will resume before other ready threads of equal priority for the remainder of its time-slice.

    Using time-slicing results in a slight amount of system overhead. Because time-slicing is only useful in cases in which multiple threads share the same priority, threads having a unique priority should not be assigned a time-slice.
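
    As a sketch, a time-slice can also be assigned or changed after creation with tx_thread_time_slice_change; the thread and the 4-tick value are illustrative:

    #include "tx_api.h"

    extern TX_THREAD worker_thread;    /* assumed to share its priority with other threads */

    VOID enable_time_slicing(VOID)
    {
        ULONG old_time_slice;

        /* Allow worker_thread to run for at most 4 timer ticks before other ready
           threads of the same priority get a turn; the prior setting is returned. */
        tx_thread_time_slice_change(&worker_thread, 4, &old_time_slice);
    }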

    Preemption

    Preemption is the process of temporarily interrupting an executing thread in favor of a higher-priority thread. This process is invisible to the executing thread. When the higher-priority thread is finished, control is transferred back to the exact place where the preemption took place.

    This is a very important feature in real-time systems because it facilitates fast response to important application events. Although a very important feature, preemption can also be a source of a variety of problems, including starvation, excessive overhead, and priority inversion.

    Preemption-Threshold

    To ease some of the inherent problems of preemption, ThreadX provides a unique and advanced feature called preemption-threshold.

    A preemption-threshold allows a thread to specify a priority ceiling for disabling preemption. Threads that have higher priorities than the ceiling are still allowed to preempt, while those less than the ceiling are not allowed to preempt.

    For example, suppose a thread of priority 20 only interacts with a group of threads that have priorities between 15 and 20. During its critical sections, the thread of priority 20 can set its preemption-threshold to 15, thereby preventing preemption from all of the threads that it interacts with. This still permits really important threads (priorities between 0 and 14) to preempt this thread during its critical section processing, which results in much more responsive processing.

    Of course, it is still possible for a thread to disable all preemption by setting its preemption-threshold to 0. In addition, preemption-threshold can be changed during run-time.

    Using preemption-threshold disables time-slicing for the specified thread.
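
    A sketch of the priority-20 example above, using tx_thread_preemption_change around a critical section (the function name and the resource being protected are hypothetical):

    #include "tx_api.h"

    VOID critical_section(VOID)
    {
        TX_THREAD *self = tx_thread_identify();   /* the calling thread */
        UINT       old_threshold;
        UINT       dummy;

        /* Block preemption from priorities 15 through 19 while in the critical
           section; priorities 0 through 14 may still preempt. */
        tx_thread_preemption_change(self, 15, &old_threshold);

        /* ... access the resource shared with the priority 15-20 threads ... */

        tx_thread_preemption_change(self, old_threshold, &dummy);
    }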

    Priority Inheritance

    ThreadX also supports optional priority inheritance within its mutex services described later in this chapter. Priority inheritance allows a lower priority thread to temporarily assume the priority of a high priority thread that is waiting for a mutex owned by the lower priority thread. This capability helps the application to avoid non-deterministic priority inversion by eliminating preemption of intermediate thread priorities. Of course, preemption-threshold may be used to achieve a similar result.

    Thread Creation

    Application threads are created during initialization or during the execution of other application threads. There is no limit on the number of threads that can be created by an application.

    Thread Control Block TX_THREAD

    The characteristics of each thread are contained in its control block. This structure is defined in the tx_api.h file.

    A thread's control block can be located anywhere in memory, but it is most common to make the control block a global structure by defining it outside the scope of any function.

    Locating the control block in other areas requires a bit more care, just like all dynamically allocated memory. If a control block is allocated within a C function, the memory associated with it is part of the calling thread's stack. In general, avoid using local storage for control blocks because after the function returns, all of its local variable stack space is released, regardless of whether another thread is using it for a control block!

    In most cases, the application is oblivious to the contents of the thread's control block. However, there are some situations, especially during debug, in which looking at certain members is useful. The following are some of the more useful control block members:

    tx_thread_run_count contains a count of the number of times the thread has been scheduled. An increasing counter indicates the thread is being scheduled and executed.

    tx_thread_state contains the state of the associated thread. The following lists the possible thread states:

    TX_READY (0x00)
    TX_COMPLETED (0x01)
    TX_TERMINATED (0x02)
    TX_SUSPENDED (0x03)
    TX_SLEEP (0x04)
    TX_QUEUE_SUSP (0x05)
    TX_SEMAPHORE_SUSP (0x06)
    TX_EVENT_FLAG (0x07)
    TX_BLOCK_MEMORY (0x08)
    TX_BYTE_MEMORY (0x09)
    TX_MUTEX_SUSP (0x0D)


    Of course there are many other interesting fields in the thread control block, including the stack pointer, time-slice value, priorities, etc. Users are welcome to review control block members, but modifications are strictly prohibited!

    There is no equate for the "executing" state mentioned earlier in this section. It is not necessary because there is only one executing thread at a given time. The state of an executing thread is also TX_READY.

    Currently Executing Thread

    As mentioned before, there is only one thread executing at any given time. There are several ways to identify the executing thread, depending on which thread is making the request.

    A program segment can get the control block address of the executing thread by calling tx_thread_identify. This is useful in shared portions of application code that are executed from multiple threads.
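
    For example, a shared helper can branch on the calling thread (a sketch; the helper and what it does with the pointer are hypothetical):

    #include "tx_api.h"

    VOID shared_helper(VOID)
    {
        /* Returns the control block of the calling thread, or TX_NULL when
           called from initialization or an interrupt service routine. */
        TX_THREAD *caller = tx_thread_identify();

        if (caller != TX_NULL)
        {
            /* e.g., log caller->tx_thread_name or take thread-specific action */
        }
    }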

    In debug sessions, users can examine the internal ThreadX pointer _tx_thread_current_ptr. It contains the control block address of the currently executing thread. If this pointer is NULL, no application thread is executing; i.e., ThreadX is waiting in its scheduling loop for a thread to become ready.

    Thread Stack Area

    Each thread must have its own stack for saving the context of its last execution and for compiler use. Most C compilers use the stack for making function calls and for temporarily allocating local variables. Figure 6 shows a typical thread's stack.

    Where a thread stack is located in memory is up to the application. The stack area is specified during thread creation and can be located anywhere in the target's address space. This is an important feature because it allows applications to improve performance of important threads by placing their stack in high-speed RAM.

    How big a stack should be is one of the most frequently asked questions about threads. A thread's stack area must be large enough to accommodate worst-case function call nesting, local variable allocation, and saving its last execution context.

    The minimum stack size, TX_MINIMUM_STACK, is defined by ThreadX. A stack of this size supports saving a thread's context and a minimum amount of function calls and local variable allocation.

    For most threads, however, the minimum stack size is too small, and the user must ascertain the worst-case size requirement by examining function-call nesting and local variable allocation. Of course, it is always better to start with a larger stack area.

    FIGURE 6. Typical Thread Stack (diagram showing an example stack memory area at physical addresses 0x0000F200-0x0000FC00, the tx_stack_ptr, the thread's last execution context, space for local variables and C function nesting, and the direction of typical run-time stack growth)

    After the application is debugged, it is possible to tune the thread stack sizes if memory is scarce. A favorite trick is to preset all stack areas with an easily identifiable data pattern, like 0xEFEF, prior to creating the threads. After the application has been thoroughly put through its paces, the stack areas can be examined to see how much stack was actually used by finding the area of the stack where the data pattern is still intact. Figure 7 shows a stack preset to 0xEFEF after thorough thread execution.

    By default, ThreadX initializes every byte of each thread stack with a value of 0xEF.

    FIGURE 7. Stack Preset to 0xEFEF (diagram showing another example stack memory area at physical addresses 0x0000F200-0x0000FC00; the used portion holds the thread's last execution context, local variables, and C function nesting, while the unused stack area still contains the 0xEFEF preset pattern)


    Memory Pitfalls

    The stack requirements for threads can be large. Therefore, it is important to design the application to have a reasonable number of threads. Furthermore, some care must be taken to avoid excessive stack usage within threads. Recursive algorithms and large local data structures should be avoided.

    In most cases, an overflowed stack causes thread execution to corrupt memory adjacent to (usually before) its stack area. The results are unpredictable, but most often result in an unnatural change in the program counter. This is often called "jumping into the weeds." Of course, the only way to prevent this is to ensure all thread stacks are large enough.

    Optional Run-time Stack Checking

    ThreadX provides the ability to check each thread's stack for corruption during run-time. By default, ThreadX fills every byte of thread stacks with a 0xEF data pattern during creation. If the application builds the ThreadX library with TX_ENABLE_STACK_CHECKING defined, ThreadX will examine each thread's stack for corruption as it is suspended or resumed. If stack corruption is detected, ThreadX will call the application's stack error handling routine as specified by the call to tx_thread_stack_error_notify. Otherwise, if no stack error handler was specified, ThreadX will call the internal _tx_thread_stack_error_handler routine.
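
    A sketch of registering a stack error handler follows; the handler name and its recovery action are application choices:

    #include "tx_api.h"

    /* Called by ThreadX when stack corruption is detected while a thread is
       suspended or resumed (library built with TX_ENABLE_STACK_CHECKING). */
    static VOID my_stack_error_handler(TX_THREAD *thread_ptr)
    {
        (void) thread_ptr;
        /* Application-specific recovery: log the offending thread, halt, reset, etc. */
    }

    VOID register_stack_error_handler(VOID)
    {
        tx_thread_stack_error_notify(my_stack_error_handler);
    }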

    Reentrancy

    One of the real beauties of multithreading is that the same C function can be called from multiple threads. This provides great power and also helps reduce code space. However, it does require that C functions called from multiple threads are reentrant.

    Basically, a reentrant function stores the caller's return address on the current stack and does not rely on global or static C variables that it previously set up. Most compilers place the return address on the stack. Hence, application developers must only worry about the use of globals and statics.

    An example of a non-reentrant function is the string token function strtok found in the standard C library. This function remembers the previous string pointer on subsequent calls. It does this with a static string pointer. If this function is called from multiple threads, it would most likely return an invalid pointer.
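
    The contrast can be illustrated with a small hypothetical helper (not part of the ThreadX API); the reentrant version keeps its state in caller-supplied storage, typically the calling thread's stack, instead of in a static buffer:

    #include <stdio.h>
    #include <stddef.h>

    /* Non-reentrant: the static buffer is shared by every thread that calls this,
       so one thread's result can be overwritten by another. */
    const char *format_id_bad(unsigned id)
    {
        static char buffer[16];
        snprintf(buffer, sizeof(buffer), "ID-%u", id);
        return buffer;
    }

    /* Reentrant: the caller supplies the storage, typically on its own stack. */
    const char *format_id(unsigned id, char *buffer, size_t size)
    {
        snprintf(buffer, size, "ID-%u", id);
        return buffer;
    }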

    Thread Priority Pitfalls

    Selecting thread priorities is one of the most important aspects of multithreading. It is sometimes very tempting to assign priorities based on a perceived notion of thread importance rather than determining what is exactly required during run-time. Misuse of thread priorities can starve other threads, create priority inversion, reduce processing bandwidth, and make the application's run-time behavior difficult to understand.

    As mentioned before, ThreadX provides a priority-based, preemptive scheduling algorithm. Lower priority threads do not execute until there are no higher priority threads ready for execution. If a higher priority thread is always ready, the lower priority threads never execute. This condition is called thread starvation.

    Most thread starvation problems are detected early in debug and can be solved by ensuring that higher priority threads don't execute continuously. Alternatively, logic can be added to the application that gradually raises the priority of starved threads until they get a chance to execute.

    Another pitfall associated with thread priorities is priority inversion. Priority inversion takes place when a higher priority thread is suspended because a lower priority thread has a needed resource. Of course, in some instances it is necessary for two threads of different priority to share a common resource. If these threads are the only ones active, the priority inversion time is bounded by the time the lower priority thread holds the resource. This condition is both deterministic and quite normal. However, if threads of intermediate priority become active during this priority inversion condition, the priority inversion time is no longer deterministic and could cause an application failure.

    There are principally three distinct methods of preventing non-deterministic priority inversion in ThreadX. First, the application priority selections and run-time behavior can be designed in a manner that prevents the priority inversion problem. Second, lower priority threads can utilize preemption-threshold to block preemption from intermediate threads while they share resources with higher priority threads. Finally, threads using ThreadX mutex objects to protect system resources may utilize the optional mutex priority inheritance to eliminate non-deterministic priority inversion.

    Priority Overhead

    One of the most overlooked ways to reduce overhead in multithreading is to reduce the number of context switches. As previously mentioned, a context switch occurs when execution of a higher priority thread is favored over that of the executing thread. It is worthwhile to mention that higher priority threads can become ready as a result of both external events (like interrupts) and from service calls made by the executing thread.

    To illustrate the effects thread priorities have on context switch overhead, assume a three-thread environment with threads named thread_1, thread_2, and thread_3. Assume further that all of the threads are in a state of suspension waiting for a message. When thread_1 receives a message, it immediately forwards it to thread_2. Thread_2 then forwards the message to thread_3. Thread_3 just discards the message. After each thread processes its message, it goes back and waits for another message.

    The processing required to execute these three threads varies greatly depending on their priorities. If all of the threads have the same priority, a single context switch occurs before the execution of each thread. The context switch occurs when each thread suspends on an empty message queue.

    However, if thread_2 is higher priority than thread_1 and thread_3 is higher priority than thread_2, the number of context switches doubles. This is because another context switch occurs inside of the tx_queue_send service when it detects that a higher priority thread is now ready.

    The ThreadX preemption-threshold mechanism can avoid these extra context switches and still allow the previously mentioned priority selections. This is an important feature because it allows several thread priorities during scheduling, while at the same time eliminating some of the unwanted context switching between them during thread execution.

    Run-time Thread Performance Information

    ThreadX provides optional run-time thread performance information. If the ThreadX library and application is built with TX_THREAD_ENABLE_PERFORMANCE_INFO defined, ThreadX accumulates the following information.

    Total number for the overall system:

    thread resumptions
    thread suspensions
    service call preemptions
    interrupt preemptions
    priority inversions
    time-slices
    relinquishes
    thread timeouts
    suspension aborts
    idle system returns
    non-idle system returns

    Total number for each thread:

    resumptions
    suspensions
    service call preemptions
    interrupt preemptions
    priority inversions
    time-slices
    thread relinquishes
    thread timeouts
    suspension aborts

    This information is available at run-time through the services tx_thread_performance_info_get and tx_thread_performance_system_info_get. Thread performance information is useful in determining if the application is behaving properly. It is also useful in optimizing the application. For example, a relatively high number of service call preemptions might suggest the thread's priority and/or preemption-threshold is too low. Furthermore, a relatively low number of idle system returns might suggest that lower priority threads are not suspending enough.
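
    A sketch of retrieving the per-thread counters follows; the thread and what is done with the counters are application choices (the library must be built with TX_THREAD_ENABLE_PERFORMANCE_INFO):

    #include "tx_api.h"

    extern TX_THREAD worker_thread;    /* assumed application thread */

    VOID dump_worker_stats(VOID)
    {
        ULONG resumptions, suspensions, solicited_preemptions, interrupt_preemptions;
        ULONG priority_inversions, time_slices, relinquishes, timeouts, wait_aborts;
        TX_THREAD *last_preempted_by;

        tx_thread_performance_info_get(&worker_thread, &resumptions, &suspensions,
            &solicited_preemptions, &interrupt_preemptions, &priority_inversions,
            &time_slices, &relinquishes, &timeouts, &wait_aborts, &last_preempted_by);

        /* e.g., log the counters or compare them against expected behavior */
    }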

    Debugging Pitfalls

    Debugging multithreaded applications is a little more difficult because the same program code can be executed from multiple threads. In such cases, a break-point alone may not be enough. The debugger must also view the current thread pointer _tx_thread_current_ptr using a conditional breakpoint to see if the calling thread is the one to debug.

    Much of this is being handled in multithreading support packages offered through various development tool vendors. Because of its simple design, integrating ThreadX with different development tools is relatively easy.

    Stack size is always an important debug topic in multithreading. Whenever unexplained behavior is observed, it is usually a good first guess to increase stack sizes for all threads, especially the stack size of the last thread to execute!

    It is also a good idea to build the ThreadX library with TX_ENABLE_STACK_CHECKING defined. This will help isolate stack corruption problems as early in the processing as possible!

    Message Queues

    Message queues are the primary means of inter-thread communication in ThreadX. One or more messages can reside in a message queue. A message queue that holds a single message is commonly called a mailbox.

    Messages are copied to a queue by tx_queue_send and are copied from a queue by tx_queue_receive. The only exception to this is when a thread is suspended while waiting for a message on an empty queue. In this case, the next message sent to the queue is placed directly into the thread's destination area.


    Each message queue is a public resource. ThreadX places no constraints on how message queues are used.

    Creating Message Queues

    Message queues are created either during initialization or during run-time by application threads. There is no limit on the number of message queues in an application.

    Message Size

    Each message queue supports a number of fixed-sized messages. The available message sizes are 1 through 16 32-bit words inclusive. The message size is specified when the queue is created.

    Application messages greater than 16 words must be passed by pointer. This is accomplished by creating a queue with a message size of 1 word (enough to hold a pointer) and then sending and receiving message pointers instead of the entire message.
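
    A sketch of the pass-by-pointer technique follows; the message structure, queue, and pool are hypothetical, and the queue is assumed to have been created with a message size of 1 word:

    #include "tx_api.h"

    typedef struct                      /* hypothetical application message, > 16 words */
    {
        ULONG  header;
        UCHAR  payload[128];
    } BIG_MSG;

    extern TX_QUEUE pointer_queue;      /* assumed queue with 1-word messages */
    static BIG_MSG  msg_pool[8];        /* storage owned by the sender */

    UINT send_big_message(UINT index)
    {
        BIG_MSG *msg_ptr = &msg_pool[index];

        /* Only the pointer is copied into the queue; the sender must not reuse
           msg_pool[index] until the receiver has finished with it. */
        return tx_queue_send(&pointer_queue, &msg_ptr, TX_WAIT_FOREVER);
    }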

    Message Queue Capacity

    The number of messages a queue can hold is a function of its message size and the size of the memory area supplied during creation. The total message capacity of the queue is calculated by dividing the number of bytes in each message into the total number of bytes in the supplied memory area.

    For example, if a message queue that supports a message size of 1 32-bit word (4 bytes) is created with a 100-byte memory area, its capacity is 25 messages.
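
    A sketch of creating such a queue follows; the queue name and storage are illustrative:

    #include "tx_api.h"

    static TX_QUEUE  demo_queue;
    static ULONG     demo_queue_area[25];    /* 100 bytes of message storage */

    UINT create_demo_queue(VOID)
    {
        /* Message size is given in 32-bit words (1 through 16). With 1-word
           messages and a 100-byte area, the capacity is 25 messages. */
        return tx_queue_create(&demo_queue, "demo queue", 1,
                               demo_queue_area, sizeof(demo_queue_area));
    }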


    Queue Memory Area

    As mentioned before, the memory area for buffering messages is specified during queue creation. Like other memory areas in ThreadX, it can be located anywhere in the target's address space.

    This is an important feature because it gives the application considerable flexibility. For example, an application might locate the memory area of an important queue in high-speed RAM to improve performance.

    Thread Suspension

    Application threads can suspend while attempting to send or receive a message from a queue. Typically, thread suspension involves waiting for a message from an empty queue. However, it is also possible for a thread to suspend trying to send a message to a full queue.

    After the condition for suspension is resolved, the service requested is completed and the waiting thread is resumed. If multiple threads are suspended on the same queue, they are resumed in the order they were suspended (FIFO).

    However, priority resumption is also possible if the application calls tx_queue_prioritize prior to the queue service that lifts thread suspension. The queue prioritize service places the highest priority thread at the front of the suspension list, while leaving all other suspended threads in the same FIFO order.

    Time-outs are also available for all queue suspensions. Basically, a time-out specifies the maximum number of timer ticks the thread will stay suspended. If a time-out occurs, the thread is resumed and the service returns with the appropriate error code.


    Queue Send Notification

    Some applications may find it advantageous to be notified whenever a message is placed on a queue. ThreadX provides this ability through the tx_queue_send_notify service. This service registers the supplied application notification function with the specified queue. ThreadX will subsequently invoke this application notification function whenever a message is sent to the queue. The exact processing within the application notification function is determined by the application; however, it typically consists of resuming the appropriate thread for processing the new message.

    Queue Event-chaining

    The notification capabilities in ThreadX can be used to chain various synchronization events together. This is typically useful when a single thread must process multiple synchronization events.

    For example, suppose a single thread is responsible for processing messages from five different queues and must also suspend when no messages are available. This is easily accomplished by registering an application notification function for each queue and introducing an additional counting semaphore. Specifically, the application notification function performs a tx_semaphore_put whenever it is called (the semaphore count represents the total number of messages in all five queues). The processing thread suspends on this semaphore via the tx_semaphore_get service. When the semaphore is available (in this case, when a message is available!), the processing thread is resumed. It then interrogates each queue for a message, processes the found message, and performs another tx_semaphore_get to wait for the next message. Accomplishing this without event-chaining is quite difficult and likely would require more threads and/or additional application code.
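
    A sketch of this event-chaining pattern follows; the queue array, names, and message size are assumptions for illustration:

    #include "tx_api.h"

    #define NUM_QUEUES 5

    extern TX_QUEUE      input_queue[NUM_QUEUES];   /* assumed queues with 1-word messages */
    static TX_SEMAPHORE  message_count;             /* total messages across all queues */

    /* Registered on every queue; called by ThreadX after each tx_queue_send. */
    static VOID any_queue_notify(TX_QUEUE *queue_ptr)
    {
        (void) queue_ptr;
        tx_semaphore_put(&message_count);
    }

    /* Call once during initialization, e.g., from tx_application_define. */
    VOID setup_event_chaining(VOID)
    {
        UINT i;

        tx_semaphore_create(&message_count, "message count", 0);
        for (i = 0; i < NUM_QUEUES; i++)
            tx_queue_send_notify(&input_queue[i], any_queue_notify);
    }

    /* Entry function of the single processing thread. */
    static VOID processing_thread_entry(ULONG unused)
    {
        ULONG message;
        UINT  i;

        (void) unused;
        while (1)
        {
            /* Suspend until at least one message exists somewhere. */
            tx_semaphore_get(&message_count, TX_WAIT_FOREVER);

            /* Find and process one message without blocking on any single queue. */
            for (i = 0; i < NUM_QUEUES; i++)
            {
                if (tx_queue_receive(&input_queue[i], &message, TX_NO_WAIT) == TX_SUCCESS)
                {
                    /* process "message" here */
                    break;
                }
            }
        }
    }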


    In general, event-chaining results in fewer threads, less overhead, and smaller RAM requirements. It also provides a highly flexible mechanism to handle synchronization requirements of more complex systems.

    Run-time Queue Performance Information

    ThreadX provides optional run-time queue performance information. If the ThreadX library and application is built with TX_QUEUE_ENABLE_PERFORMANCE_INFO defined, ThreadX accumulates the following information.

    Total number for the overall system:

    messages sent
    messages received
    queue empty suspensions
    queue full suspensions
    queue full error returns (suspension not specified)
    queue timeouts

    Total number for each queue:

    messages sent
    messages received
    queue empty suspensions
    queue full suspensions
    queue full error returns (suspension not specified)
    queue timeouts

    This information is available at run-time through the services tx_queue_performance_info_get and tx_queue_performance_system_info_get. Queue performance information is useful in determining if the application is behaving properly. It is also useful in optimizing the application. For example, a relatively high number of queue full suspensions suggests an increase in the queue size might be beneficial.
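
    A sketch of retrieving the per-queue counters follows; the queue is an assumed application object (the library must be built with TX_QUEUE_ENABLE_PERFORMANCE_INFO):

    #include "tx_api.h"

    extern TX_QUEUE command_queue;    /* assumed application queue */

    VOID dump_queue_stats(VOID)
    {
        ULONG sent, received, empty_suspensions, full_suspensions, full_errors, timeouts;

        tx_queue_performance_info_get(&command_queue, &sent, &received,
            &empty_suspensions, &full_suspensions, &full_errors, &timeouts);

        /* e.g., a high full_suspensions count suggests the memory area is too small */
    }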

    Queue Control Block TX_QUEUE

    The characteristics of each message queue are found in its control block. It contains interesting information such as the number of messages in the queue. This structure is defined in the tx_api.h file.

    Message queue control blocks can also be located anywhere in memory, but it is most common to make the control block a global structure by defining it outside the scope of any function.

    Message Destination Pitfall

    As mentioned previously, messages are copied between the queue area and application data areas. It is important to ensure the destination for a received message is large enough to hold the entire message. If not, the memory following the message destination will likely be corrupted.

    This is especially lethal when a too-small message destination is on the stack; there is nothing like corrupting the return address of a function!

    Counting Semaphores

    ThreadX provides 32-bit counting semaphores that range in value between 0 and 4,294,967,295. There are two operations for counting semaphores: tx_semaphore_get and tx_semaphore_put. The get operation decreases the semaphore by one. If the semaphore is 0, the get operation is not successful. The inverse of the get operation is the put operation. It increases the semaphore by one.


    Each counting semaphore is a public resource. ThreadX places no constraints on how counting semaphores are used.

    Counting semaphores are typically used for mutual exclusion. However, counting semaphores can also be used as a method for event notification.

    Mutual Exclusion

    Mutual exclusion pertains to controlling the access of threads to certain application areas (also called critical sections or application resources). When used for mutual exclusion, the current count of a semaphore represents the total number of threads that are allowed access. In most cases, counting semaphores used for mutual exclusion will have an initial value of 1, meaning that only one thread can access the associated resource at a time. Counting semaphores that only have values of 0 or 1 are commonly called binary semaphores.
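
    A sketch of a binary semaphore used for mutual exclusion follows; the semaphore name and the protected resource are illustrative:

    #include "tx_api.h"

    static TX_SEMAPHORE resource_sem;

    VOID create_resource_sem(VOID)
    {
        /* An initial count of 1 makes this a binary semaphore: one thread at a time. */
        tx_semaphore_create(&resource_sem, "resource guard", 1);
    }

    VOID use_shared_resource(VOID)
    {
        tx_semaphore_get(&resource_sem, TX_WAIT_FOREVER);   /* count 1 -> 0 */

        /* ... critical section: access the shared resource ... */

        tx_semaphore_put(&resource_sem);                    /* count 0 -> 1 */
    }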

    If a binary semaphore is being used, the user must prevent the same thread from perfo