13.1. Main Targets

Important

When deploying to a target hardware platform, the library included in the lib_target directory of the generated solver should be used instead of the library in the lib directory.

Main targets include:

  • x86 platforms

  • x86_64 platforms

  • 32-bit ARM-Cortex-A platforms

  • 32-bit ARM-Cortex-M platforms (no shared libraries)

  • 64-bit ARM-Cortex-A platforms (AARCH64 toolchain)

  • 64-bit ARM-Cortex-A platforms (Integrity toolchain)

  • NVIDIA platforms with ARM-Cortex-A processors

  • PowerPC platforms with GCC compiler

  • National Instruments compactRIO platforms with NILRT GCC compiler (Linux RTOS)

You can check here to find the correct naming option for each platform.

13.1.1. High-level interface

The steps to deploy and simulate a FORCESPRO controller on most targets are detailed below.

1. In the High-level interface example BasicExample.m, set the code generation options:

codeoptions.platform = '<platform_name>'; % to specify the platform
codeoptions.printlevel = 0; % optional, on some platforms printing is not supported
codeoptions.cleanup = 0; % to keep necessary files for target compile

and then generate the code for your solver (henceforth referred to as “FORCESNLPsolver”, placed in the folder “BasicExample”) using the high-level interface.

2. In addition to your solver, you will receive the following files generated by CasADi:

For solver generation from MATLAB, the following files are produced:

  • FORCESNLPsolver_adtool2forces.c

  • FORCESNLPsolver_casadi.c

  • FORCESNLPsolver_casadi.h

and for solver generation from Python, the following files are produced:

  • FORCESNLPsolver_interface.c

  • FORCESNLPsolver_model.c

  • FORCESNLPsolver_model.h

In the steps below we use the MATLAB file names, but the Python files are used in the same way.

3. For most target platforms you will receive the following compiled files:

  • For MinGW/Linux/MacOS:

    • a static library file libFORCESNLPsolver.a inside the folder lib_target

    • a shared library file libFORCESNLPsolver.so inside the folder lib_target

  • For Windows:

    • a static library file FORCESNLPsolver_static.lib inside the folder lib_target

    • a dynamic library file FORCESNLPsolver.dll with its definition file for compilation FORCESNLPsolver.lib inside the folder lib_target

You need only one of those to build the solver.

Important

If the shared library (MinGW/Linux/MacOS) or the dynamic library (Windows) is used for building, it must also be present at runtime.

4. Create an interface to call the solver and perform a simulation/test.

You can find a C interface for this example here.

Refer to section C interface: memory allocations for more information on controlling memory allocation within the C interface.

5. Copy to the target platform:

  • The FORCESNLPsolver folder

  • The source files from step 2

  • The interface from step 4

6. Compile the solver. Assuming you are in the directory that contains the FORCESNLPsolver folder, the compilation command has the following form (a concrete example is given after the list below):

<Compiler_exec> HighLevel_BasicExample.c FORCESNLPsolver_adtool2forces.c FORCESNLPsolver_casadi.c <compiled_solver> <additional_libs>

Where:

  • <Compiler_exec> is the compiler used on the target

  • <compiled_solver> is one of the compiled files from step 3

  • <additional_libs> stands for any additional libraries that must be linked to resolve dependencies:

    • For Linux/MacOS it is usually necessary to link the math library (-lm)

    • For Windows you usually need to link the iphlpapi.lib library (it is distributed with the Intel Compiler, MinGW and MATLAB) and, unless you are using MinGW, some additional Intel libraries (these are included in the FORCESPRO client under the folder libs_Intel; if missing, they are downloaded after code generation)
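
As an illustration only, assuming a Linux target with GCC, the static library from step 3 and the CasADi source files from step 2 located next to the FORCESNLPsolver folder, the command could look roughly as follows (the output name highlevel_example is arbitrary):

gcc HighLevel_BasicExample.c FORCESNLPsolver_adtool2forces.c FORCESNLPsolver_casadi.c FORCESNLPsolver/lib_target/libFORCESNLPsolver.a -lm -o highlevel_example

If you link against the shared library libFORCESNLPsolver.so instead, remember that it must also be found at runtime, e.g. via LD_LIBRARY_PATH.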

13.1.2. Low-level interface

The steps to deploy and simulate a FORCESPRO controller on most targets are detailed below.

1. In the Low-level interface example BasicExample.m, set the code generation options:

codeoptions.platform = '<platform_name>'; % to specify the platform
codeoptions.printlevel = 0; % optional, on some platforms printing is not supported

and then generate the code for your solver (henceforth referred to as “FORCESNLPsolver”, placed in the folder “BasicExample”) using the low-level interface.

2. For most target platforms you will receive the following compiled files:

  • For MinGW/Linux/MacOS:

    • a static library file libFORCESNLPsolver.a inside the folder lib_target

    • a shared library file libFORCESNLPsolver.so inside the folder lib_target

  • For Windows:

    • a static library file FORCESNLPsolver_static.lib inside the folder lib_target

    • a dynamic library file FORCESNLPsolver.dll with its definition file for compilation FORCESNLPsolver.lib inside the folder lib_target

You need only one of those to build the solver.

Important

If the shared library (MinGW/Linux/MacOS) or the dynamic library (Windows) is used for building, it must also be present at runtime.

3. Create an interface to call the solver and perform a simulation/test.

You can find a C interface for this example here.

4. Copy to the target platform:

  • The FORCESNLPsolver folder

  • The interface from step 3

5. Compile the solver. Assuming you are in the directory that contains the FORCESNLPsolver folder, the compilation command has the following form (a concrete example is given after the list below):

<Compiler_exec> LowLevel_BasicExample.c <compiled_solver> <additional_libs>

Where:

  • <Compiler_exec> is the compiler used on the target

  • <compiled_solver> is one of the compiled files from step 2

  • <additional_libs> stands for any additional libraries that must be linked to resolve dependencies:

    • For Linux/MacOS it is usually necessary to link the math library (-lm)

    • For Windows you usually need to link the iphlpapi.lib library (it is distributed with the Intel Compiler, MinGW and MATLAB) and, unless you are using MinGW, some additional Intel libraries (these are included in the FORCESPRO client under the folder libs_Intel; if missing, they are downloaded after code generation)
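
As an illustration only, assuming a Windows target with the Microsoft compiler (cl) and the static library from step 2, the command could look roughly as follows; as noted above, additional Intel libraries from libs_Intel may have to be appended:

cl LowLevel_BasicExample.c FORCESNLPsolver\lib_target\FORCESNLPsolver_static.lib iphlpapi.lib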

13.1.3. Y2F interface

The steps to deploy and simulate a FORCESPRO controller on most targets are detailed below.

1. In the Y2F interface example mpc_basic_example.m, set the code generation options:

codeoptions.platform = '<platform_name>'; % to specify the platform
codeoptions.printlevel = 0; % optional, on some platforms printing is not supported

and then generate the code for your solver (henceforth referred to as “simpleMPC_solver”, placed in the folder “Y2F”) using the Y2F interface.

2. The Y2F solver is composed of a main solver that calls multiple internal solvers. The file describing the main solver is:

  • simpleMPC_solver.c inside the folder interface

3. The internal solvers are provided as compiled files. For most target platforms you will receive the following compiled files:

  • For MinGW/Linux/MacOS:

    • a static library file libinternal_simpleMPC_solver_1.a inside the folder lib_target

    • a shared library file libinternal_simpleMPC_solver_1.so inside the folder lib_target

  • For Windows:

    • a static library file internal_simpleMPC_solver_1_static.lib inside the folder lib_target

    • a dynamic library file internal_simpleMPC_solver_1.dll with its definition file for compilation internal_simpleMPC_solver_1.lib inside the folder lib_target

You need only one of those to build the solver.

Important

If the shared library (MinGW/Linux/MacOS) or the dynamic library (Windows) is used for building, it must also be present at runtime.

4. Create an interface to call the solver and perform a simulation/test.

You can find a C interface for this example here.

5. Copy to the target platform:

  • The simpleMPC_solver folder

  • The interface from step 4

6. Compile the solver. Assuming you are in the directory that contains the simpleMPC_solver folder, the compilation command has the following form (a concrete example is given after the list below):

<Compiler_exec> Y2F_mpc_basic_example.c simpleMPC_solver/interface/simpleMPC_solver.c <compiled_solver> <additional_libs>

Where:

  • <Compiler_exec> is the compiler used on the target

  • <compiled_solver> is one of the compiled files from step 3

  • <additional_libs> stands for any additional libraries that must be linked to resolve dependencies:

    • For Linux/MacOS it is usually necessary to link the math library (-lm)

    • For Windows you usually need to link the iphlpapi.lib library (it is distributed with the Intel Compiler, MinGW and MATLAB) and sometimes some additional Intel libraries (these are included in the FORCESPRO client under the folder libs_Intel; if missing, they are downloaded after code generation)
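
As an illustration only, assuming a Linux target with GCC and the static library from step 3, the command could look roughly as follows (the output name y2f_example is arbitrary; if your problem generates several internal solvers, list all of their libraries):

gcc Y2F_mpc_basic_example.c simpleMPC_solver/interface/simpleMPC_solver.c simpleMPC_solver/lib_target/libinternal_simpleMPC_solver_1.a -lm -o y2f_example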

13.1.4. C interface: memory allocations

The C interface provides some flexibility in how the solver memory is allocated:

  • internal memory: memory buffer is statically allocated inside the solver library

  • external memory: the user is responsible for allocating the memory buffer

The internal memory option is easier to use and thus the default way of interfacing the solver in C. The external memory option is recommended for users who want full control over memory allocation (static or dynamic), or who require multiple memory buffers (e.g. for running a solver in parallel, see External parallelism).

Henceforth, we assume a generated solver named FORCESNLPsolver and demonstrate how to use external and internal memory buffers. We assume the reader is already acquainted with the C interface (see section High-level interface).

You can find the full code for working examples for the internal and external memory interfaces including instructions on how to run them in the examples\StandaloneExecution\C folder that comes with your client.

13.1.4.1. Internal memory

If you don’t need control over the memory buffer, the internal memory C interface is the recommended way of calling a generated solver in C:

/* additional header for internal memory functionality */
#include "FORCESNLPsolver/include/FORCESNLPsolver_memory.h"

/* handle to the solver memory */
FORCESNLPsolver_mem * mem_handle;

/* Get i-th memory buffer */
int i = 0;
mem_handle = FORCESNLPsolver_internal_mem(i);
/* Note: number of available memory buffers is controlled by code option max_num_mem */

/* check that memory is in valid state: */
if (mem_handle == NULL)
{
    /* this happens if i >= max_num_mem */
    return 1;
}
/* call the solver, passing the memory handle (other arguments omitted here) */
exit_code = FORCESNLPsolver_solve(..., mem_handle, ...);

By default, one memory buffer is available (max_num_mem = 1). For further information on the code option max_num_mem, see Table 13.1.
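
For example, if you intend to run several solver instances in parallel with internal memory, you can request more buffers at code generation time; the value 4 below is purely illustrative:

codeoptions.max_num_mem = 4; % number of internal memory buffers to generate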

13.1.4.2. External memory

The following code sample demonstrates the use of external memory with dynamic allocation:

/* needed for malloc and free */
#include <stdlib.h>

/* memory buffer allocated by the user, of type char (representing bytes) */
char * mem;

/* handle to the solver memory */
FORCESNLPsolver_mem * mem_handle;

/* required memory size in bytes */
size_t mem_size = FORCESNLPsolver_get_mem_size();

/* dynamically allocate memory buffer */
mem = malloc(mem_size);

/* cast memory buffer to solver memory
 * note: i can be set to 0 if no thread safety required */
int i = 0;
mem_handle = FORCESNLPsolver_external_mem(mem, i, mem_size);

/* check that memory is in valid state: */
if (mem_handle == NULL)
{
    return 1;
}
/* call the solver, passing the memory handle (other arguments omitted here) */
exit_code = FORCESNLPsolver_solve(..., mem_handle, ...);

/* free user-allocated memory */
free(mem);

For static allocation, the memory size MEM_SIZE needs to be set at compile time:

/* memory size in bytes s.t. MEM_SIZE >= FORCESNLPsolver_get_mem_size(): */
#define MEM_SIZE 12345

/* statically allocated memory buffer of size MEM_SIZE bytes */
static char mem[MEM_SIZE];
/* cast buffer to solver memory */
mem_handle = FORCESNLPsolver_external_mem(mem, 0, MEM_SIZE);

/* check that memory is in valid state: */
if (mem_handle == NULL)
{
    /* this happens if MEM_SIZE < FORCESNLPsolver_get_mem_size() */
    return 1;
}

The minimum required MEM_SIZE is system and compiler dependent. It can easily be obtained by compiling FORCESNLPsolver\interface\FORCESNLPsolver_get_mem_size.c and linking it against the solver library on the target device; the output of the resulting executable equals the value returned by FORCESNLPsolver_get_mem_size().
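
As an illustration only, on a Linux target with GCC and the static library this could be done as follows (the output name get_mem_size is arbitrary; depending on your setup an additional include path such as -IFORCESNLPsolver/include may be required):

gcc FORCESNLPsolver/interface/FORCESNLPsolver_get_mem_size.c FORCESNLPsolver/lib_target/libFORCESNLPsolver.a -lm -o get_mem_size
./get_mem_size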

For memory-critical systems, we recommend disabling the internal memory buffer by setting (see also Table 13.1)

codeoptions.max_num_mem = 0;

Otherwise, the solver library still contains one internal memory buffer required by the client to run the solver.

13.1.4.3. Obtaining memory size

The required memory size for a solver can be obtained by calling the two utility functions provided by the solver library:

size_t FORCESNLPsolver_get_mem_size( void );
size_t FORCESNLPsolver_get_const_size( void );

These functions return the memory size of all non-const and const variables, respectively. This information is also printed in the solver output if printlevel = 2. Note that the memory size depends on the system and compiler. The actual memory footprint might be larger than reported by these functions, since they only account for the memory needed to store data (the data or bss segment in the binary), not for the full binary size.

The size returned by FORCESNLPsolver_get_mem_size refers to an internal or external memory buffer. The size returned by FORCESNLPsolver_get_const_size refers to additional constant memory that is not exposed to the user. To obtain the total memory size of a solver that is called in parallel, the size of the memory buffer must be multiplied by the number of threads: total_size = NUM_THREADS * FORCESNLPsolver_get_mem_size() + FORCESNLPsolver_get_const_size().
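
The following is a minimal sketch of how this adds up for external memory, building on the external-memory snippet above (same headers apply) and assuming NUM_THREADS parallel solver calls with one user-allocated buffer each; error handling is omitted and NUM_THREADS is purely illustrative:

/* needed for malloc */
#include <stdlib.h>

#define NUM_THREADS 4

/* one external memory buffer and one handle per thread */
size_t mem_size = FORCESNLPsolver_get_mem_size();
char * buffers[NUM_THREADS];
FORCESNLPsolver_mem * handles[NUM_THREADS];
int i;

for (i = 0; i < NUM_THREADS; i++)
{
    /* in total NUM_THREADS * FORCESNLPsolver_get_mem_size() bytes are allocated here */
    buffers[i] = malloc(mem_size);
    handles[i] = FORCESNLPsolver_external_mem(buffers[i], i, mem_size);
}

/* thread k then calls FORCESNLPsolver_solve(..., handles[k], ...);
 * the constant memory reported by FORCESNLPsolver_get_const_size() resides
 * in the solver library itself and is therefore counted only once */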

13.1.5. Compilation with C++

Compiling a FORCESPRO solver with a C++ compiler is straightforward: simply use the C interface as described in Section 13.1.1 along with the extern "C" syntax to ensure that C linkage conventions are used.
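
For instance, an include block in a C++ source file could look as follows (a minimal sketch; the header paths follow the FORCESNLPsolver example used in the previous sections and may differ for your solver):

/* wrap the solver's C headers to enforce C linkage */
extern "C"
{
#include "FORCESNLPsolver/include/FORCESNLPsolver.h"
#include "FORCESNLPsolver/include/FORCESNLPsolver_memory.h"
}

The solver functions, such as FORCESNLPsolver_solve, can then be called from C++ code exactly as in the C examples above.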

A complete example of compiling a nonlinear FORCESPRO solver with a C++ compiler using CMake can be found in the examples\StandaloneExecution\Cpp folder that comes with your client. In order to run this example, do the following:

  1. Execute OverheadCrane.m in MATLAB to generate a FORCESPRO solver called CraneSolver (uncomment line 96 to generate a solver for Linux).

  2. Run cmake .

  3. Run cmake --build .

  4. An executable CraneSolver_standalone should now be available; feel free to run it.