
TERRACOTTA DB: FASTEST FOR IN-MEMORY DATA

The volume of data ingested into our systems is only going to accelerate with the Internet of Things (IoT) and tighter app integration on mobile devices like smartphones. The challenge is not just ingesting that data but using it to deliver meaningful insights to the business and trigger actions with the lowest latency possible. Here’s where Terracotta DB excels.

TABLE OF CONTENTS

Why Terracotta DB excels

A closer look at the architecture

Terracotta DB specialist subsystems

Getting started with Terracotta DB

Why two subsystem APIs?

Managing Terracotta DB

The power of Ehcache: A closer look

TCStore basics

Support for Terracotta DB

Take the next step

Recommended resources

NOVEMBER 2017 WHITE PAPER


In the database world, Peer-to-Peer In-Memory Data Grids (P2P IMDGs) have proved capable of delivering insights relatively fast. Yet they are difficult to scale, complex to deploy and expensive to maintain. This is why Software AG has launched Terracotta DB, which builds on our own class-leading IMDG, Terracotta BigMemory, to deliver a more powerful and high-performing in-memory persistent database capability. Unlike its predecessor, Terracotta DB uniquely supports hybrid “translytical” processing requirements, providing both caching and operational storage, as well as server-side analytical search and compute—all on a single, scalable platform. A unified architecture combines the power of a NoSQL data store with the performance and resilience of Software AG’s fastest yet in-memory technology. Using a tiered approach to deliver storage and in-memory processing, Terracotta DB meets both the business and technical demands placed on modern applications in a way that scales better and is more performant and cost effective—all deliverable on commodity hardware.

This white paper introduces the key architectural features of Terracotta DB and gives examples of how to use the platform. At the end of this paper, you’ll find recommended resources for further technical reading.

Why Terracotta DB excels

Terracotta DB is an in-memory data management platform designed to meet the demands of the largest hybrid operational and analytical workloads at terabyte scale. This next-generation in-memory data management platform builds on the experience gained in more than 2 million enterprise deployments of Terracotta technology in 190 countries and at 70% of Global 1000 companies.

Operational and analytical database capabilities, together on a single platform

Data platform trends have evolved greatly from the relational database model up to today’s in-memory data management solutions. New developments have given enterprises great performance improvements for distributed computing algorithms or scenarios where a grid-based architecture is needed.

Terracotta DB overcomes a number of limitations of classic IMDGs. It uniquely offers a true combination of distributed scalability, 99.999% availability and blisteringly high performance—all combined with data persistency and processing suited to enterprise-level operational and analytical requirements.

Classic IMDGs struggle to scale up to meet the performance needs of modern workloads, because doing so creates throughput bottlenecks caused by unnecessary inter-node traffic as the number of nodes rises. There are also latency issues as a result of Java® garbage collection that cannot be avoided in those environments.

The very nature of P2P IMDGs also means they are complex to deploy and manage. The sheer number of nodes needed to achieve performance equivalent to Terracotta puts this in perspective. Heavy traffic levels and poor memory use mean that up to 100x more nodes/Java Virtual Machines (JVM®) are needed to manage the same data volumes. Node counts at this scale have a serious impact on the total cost of ownership of an infrastructure built on classic P2P IMDG technology.


For developers, P2P IMDGs can also lengthen the time it takes to get new applications into production, as code may need rewriting to run on proprietary architectures or may rely on specialized tools, hardware and infrastructure.

Taking in-memory data management to a higher level, Terracotta DB uses Java as its primary language to execute code and access the subsystem APIs. It employs a strongly typed database approach, but with a loose schema. That gives developers huge flexibility when it comes to defining their record and database structure while benefiting from the consistency and reliability of defined data types. The storage API’s “loose schema” approach provides a powerful hybrid of schema-on-write and schema-on-read.

Modern systems, which have sophisticated needs and must deal with multiple and/or quickly evolving data sources, highly favor schema-on-read semantics. So most NoSQL offerings take this approach. Use cases such as those in the IoT space also favor schema-on-read (weak or no enforced schema). As an example, different brands and models of sensors may provide data for the same thing (e.g., “humidity”) in various data types and formats, yet the flood of data needs to be consumed and stored quickly without applying mappings or transformations that would hamper performance at scale.

Terracotta DB addresses the challenges that modern businesses face in working with real-time, high-volume, high-velocity data by offering:

• Predictable, low latency at scale – Terracotta DB scales easily to terabyte data volumes while keeping latency low and predictable. It uses the maximum amount of memory available and off-heap storage to give near-instant access to data. Low latency is not only maintained when scaling up, it is also predictable when scaling out across multiple nodes.

• Enterprise-grade capabilities – For mission-critical applications that insist on ultra-high availability, Terracotta DB is a great choice. It achieves 99.999% availability with full fault tolerance and exceptionally fast restarts. SLA-backed 24/7 support also ensures IT departments have full access to the expertise needed to maintain and expand their use of Terracotta DB.

• Ease of integration and operation – Terracotta DB uses industry-standard APIs and will run with any Java application, minimizing the need for code rewrites. It can be snapped in with just two configuration changes. All in all, this means deployment to production systems can typically be achieved in less than three months.

• Low total cost of ownership – With its use of high-density, commodity hardware and rapid deployment times, Terracotta DB ensures organizations can scale up and out at speed, with fewer servers for a given workload, and simpler management regimes.

A closer look at the architecture

Terracotta DB is a comprehensive, distributed in-memory data management solution, which caters to both caching and operational storage use cases, and it enables both transactional and analytical processing. Terracotta DB has one of the most powerful query and computation capabilities in its class, leveraging native JDK features such as Java Streams, collections and functions.

At its heart, Terracotta DB is built on already class-leading foundation technology. Software AG’s Terracotta BigMemory has long been the world’s premier distributed in-memory data grid platform, capable of delivering extremely low, predictable latency at any scale. It has been the simplest, most cost-effective way to get lightning-fast, predictable access to big data—cutting processing time from minutes to seconds or even less. BigMemory has proven to make data available to applications in real time, and it can easily be scaled up and out.


Scale up and scale out to deliver both high performance and high availability transparently to your applications.

Terracotta DB gives even better performance, and it is just as easy to scale an implementation up and out, even to the cloud. There are several key areas where it excels over other in-memory data management architectures:

Rich data storage & access

Terracotta DB can hold any type of data and has several tiers of data storage available that can be configured to give the best performance for a given application’s requirements (external data sources, disk space, and in-memory storage split between heap and off-heap).

Unlimited scale & guaranteed latency

There is no internal limit on how big Terracotta BigMemory can scale. It has been shown to successfully manage nearly 6TB of data in memory on a single server. In tests, scaling from 2TB to 5.5TB on a single server, Terracotta DB maintains a flat level of extremely low latency, demonstrating guaranteed low latency regardless of the scale of an implementation at the server level. With the option to scale out, there is no practical limit to how many servers, mirrors, stripes and clusters can be added to provide the capacity and performance required.

Built-in high availability

Terracotta DB utilizes various techniques to guarantee high availability and fast disaster recovery. Implemented on commodity hardware, each Terracotta Server Array can be composed of multiple stripes and clusters in a distributed topology where each active server can have one or more mirrors running in parallel, ready to provide instant failover. Data updates can also be persisted to disk in our “Fast-Restart Store” for the speediest possible restoration of service in worst-case scenarios.

Ease of development

Development is simplified in Terracotta DB through the use of APIs. In the case of caching, the latest industry standard Ehcache API (version 3) is provided, which is one of the most widely used open source Java caching APIs. Because Ehcache is included in a number of frameworks, it is easy to take advantage of its features either for rapid distributed deployment or standalone use.

Ease of management

Along with the APIs, Terracotta DB is built upon the next generation of Terracotta Server. The APIs leverage the power of this new distributed computing platform with reduced complexity and enhanced performance and scalability. Similarly, the shared platform provides a common web-based monitoring and management application, the Terracotta Management Console.


Community driven

There are more than 2.5 million active developers contributing to forums such as Google Groups™, GitHub® and Software AG TECHcommunity, all supporting each other and proposing updates for the product.

Terracotta DB specialist subsystems

Terracotta DB has two specialist subsystems for storage and caching called TCStore and Ehcache. TCStore is responsible for operational store and compute functionality, while Ehcache looks after all caching functionality. Each subsystem has its functionality exposed through a dedicated API.

Both storage and caching use cases supported by dedicated APIs running on a common platform

Both subsystems are backed by the Terracotta Server, which provides a common platform for distributed in-memory data storage with scale-out, scale-up and high-availability features, built on commodity hardware. A cluster of Terracotta Servers configured to work together is referred to as a Terracotta Server Array (TSA). A TSA can vary from a single server, to a basic two-server tandem for High Availability (HA), to a multi-server array, providing configurable scale, high performance and deep failover coverage.

The main features of the Terracotta Server include:

• Distributed in-memory data management that can manage 10-100x more data in memory than standard IMDGs

• Scalability without complexity through simple configuration and deployment options for scaling up and/or scaling out to meet growing demand and facilitate capacity planning

• High availability with instant failover for continuous uptime and services

• Configurable health monitoring for both client and server health

• Persistent application state achieved through automatic permanent storage of all current shared in-memory data with ultra-fast recovery upon server restarts

• Automatic node reconnection where temporarily disconnected server instances and clients can re-join the cluster without operator intervention

Getting started with Terracotta DB

If you want to start exploring Terracotta DB straight away, it can be downloaded from the Software AG website with a 90-day trial license, free of charge. See the Recommended Resources in this paper for links to downloads and installation guides.


The Terracotta DB distribution kit is provided as an archive containing Java® Archive (JAR) files, command line scripts and other associated files. The installation procedure simply consists of downloading the archive file from Software AG’s Software Download Center (SDC) and expanding it to a suitable disk location in your working environment.

Why two subsystem APIs?

Two distinct APIs in Terracotta DB, Ehcache and TCStore, simplify development and separate data management according to the needs of specific use cases. There is a strict separation between the Ehcache API and the TCStore API, even when used with the same Terracotta Server. Information placed using the TCStore API cannot be retrieved using the Ehcache API, and vice versa.

Terracotta Server acts as the core distributed storage platform and is common for data placed using both the Ehcache API and the TCStore API, yet cached data is managed separately from stored data.

Ehcache API

Ehcache is designed to give you access to the data you need in large quantities at maximum speed using a simple key/value look-up. The API scales from in-process caching, all the way to mixed in-process/out-of-process deployments with terabyte-sized caches. It allows you to offload work from the system of record because it continually works to keep hotter, fresher data entries available at the fastest in-memory speeds. In addition, it is optimized for application caching.

Ehcache is the most widely used Java-based cache, and the version included with Terracotta DB is the commercial distribution. It is a proven technology that is robust and fully featured, and it integrates with other popular libraries and frameworks.

The API is fast and has a very small footprint to keep your apps as light as possible, with only one dependency: SLF4J, the Simple Logging Facade for Java. It is also a standards-centered API with support for JSR-107 (JCache, the Java Temporary Caching API), which means developers can use it as a JCache provider rather than using the native Ehcache API. Ehcache also supports JTA and is a fully XA-compliant resource that participates in transactions with two-phase commit and recovery.
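To illustrate the JSR-107 support, here is a minimal sketch that accesses Ehcache purely through the standard javax.cache (JCache) API; it assumes the Ehcache JCache provider module is on the classpath, and the cache alias and types are illustrative:

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class JCacheExample {
  public static void main(String[] args) {
    // Resolves Ehcache's provider when it is the only JCache implementation on the classpath
    CachingProvider provider = Caching.getCachingProvider();
    CacheManager cacheManager = provider.getCacheManager();

    // The cache is configured and used through standard JSR-107 types only
    MutableConfiguration<Long, String> configuration =
        new MutableConfiguration<Long, String>().setTypes(Long.class, String.class);
    Cache<Long, String> cache = cacheManager.createCache("jsr107Cache", configuration);

    cache.put(1L, "cached via JCache");
    System.out.println(cache.get(1L));

    cacheManager.close();
  }
}

Because only javax.cache types appear in application code, the same code can later be pointed at a different JCache provider without changes.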

The largest implementations of Ehcache are multi-terabyte data stores and can include hundreds of nodes. Using off-heap storage, Ehcache has been tested to store 6TB of data in a single process instance.

TCStore API

TCStore API is an interface for distributed in-memory data storage and computation, which has powerful ties to JDK features related to streams, collections and functions. It is for use with data that you always expect to have access to, so unlike Ehcache, it will not evict records to make way for the latest data that has been ingested into Terracotta DB. TCStore is the system of record, because data held within it is persistent, structured and strongly typed.

Reliable searches and queries can be conducted through the TCStore API at in-memory speeds, using either key look-up or by searching on other fields in the record structure.

Under the hood, TCStore is powered by a completely new and powerful storage engine, which is an “aggregate-oriented, key-value, wide-column store” built upon a very high-performance and highly scalable architecture. Some of the key high-level features of this API include:

• Loose schema – Modeling of data with structured and typed aggregate values within records

• Storage – In-memory storage with optional persistence to disk and ultra-fast recovery; a Java-based key/value store optimized for data storage workloads; and secondary in-memory indexes to speed up search and compute


• Distributed store – Supports various scale-out and HA deployment configurations

• Flexible – TCStore offers fine-grained configuration of the availability, consistency and durability of data

• Data analysis – Search and analysis capabilities that work naturally with Java 8 technologies, using the Java Streams API to filter, aggregate and map data. TCStore also includes a domain-specific language with a library of pre-implemented lambda functions, enabling server-side execution of queries

Managing Terracotta DB

Terracotta DB also includes the Terracotta Management Console (TMC), a powerful web-based administration and monitoring application with many capabilities and advantages, including:

Manage and monitor Terracotta DB deployments centrally using Terracotta Management Console

• Feature-rich and easy-to-use interface

• Remote management capabilities requiring only a web browser and network connection

• Capabilities to visualize cluster topologies, monitor health and manage Terracotta Servers

• Aggregated performance and usage statistics from multiple Terracotta nodes

• Cross-platform deployment

• Flexible deployment model, which can plug into both development environments and secure production architectures

• Multi-level security architecture, with end-to-end SSL secure connections available

• Role-based authentication and authorization

• Support for LDAP directories and Microsoft® Active Directory®

More detailed information on the features of the TMC, as well as user guides, can be found via the links at the end of this document.

The power of Ehcache: A closer look

Ehcache is extremely powerful, but that does not mean it is complicated to use. As outlined, Ehcache is used to hold data that has recently been ingested into Terracotta DB or will be used imminently by an application or process. Data in Ehcache should eventually be refreshed and is expected to expire from the cache at a defined point.


Keeping data fresh

Data freshness describes how up-to-date a copy of data (e.g., in a cache) is compared to the source version of the data, e.g., in the System of Record (SoR). A stale copy is considered to be out of sync (or likely to be out of sync) with the SoR.

Databases (and other SoRs) weren’t built with caching outside of the database in mind and, therefore, don’t normally come with any default mechanism for notifying external processes when data has been updated or modified. Thus, external components that have loaded data from the SoR have no direct way of ensuring that data is not stale.

Ehcache can help you reduce the likelihood that stale data is used by your application by expiring cache entries after some amount of configured time. Once expired, the entry is automatically removed from the cache.

For instance, the cache could be configured to expire entries five seconds after they are put into the cache, which is a Time-To-Live (TTL) setting. Or it could be configured to expire entries 17 seconds after the last time the entry was retrieved from the cache, which is a Time-To-Idle (TTI) setting. This is defined by you depending on the specific business and technical needs of your cache.
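As a sketch of how those settings look in code, the two cache configurations below apply a five-second TTL and a 17-second TTI, assuming the Expirations helper from the Ehcache 3 releases current at the time of writing (later 3.x releases provide an equivalent ExpiryPolicyBuilder); sizes and types are illustrative:

import java.util.concurrent.TimeUnit;

import org.ehcache.config.CacheConfiguration;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.expiry.Duration;
import org.ehcache.expiry.Expirations;

public class ExpiryExamples {

  // TTL: entries expire five seconds after they are put into the cache
  static final CacheConfiguration<Long, String> TTL_CONFIG =
      CacheConfigurationBuilder
          .newCacheConfigurationBuilder(Long.class, String.class, ResourcePoolsBuilder.heap(100))
          .withExpiry(Expirations.timeToLiveExpiration(Duration.of(5, TimeUnit.SECONDS)))
          .build();

  // TTI: entries expire 17 seconds after the last time they were read from the cache
  static final CacheConfiguration<Long, String> TTI_CONFIG =
      CacheConfigurationBuilder
          .newCacheConfigurationBuilder(Long.class, String.class, ResourcePoolsBuilder.heap(100))
          .withExpiry(Expirations.timeToIdleExpiration(Duration.of(17, TimeUnit.SECONDS)))
          .build();
}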

Understanding storage tiers

You can configure Ehcache to use various data storage areas. When a cache is configured to use more than one storage area, those areas are arranged and managed as tiers. They are organized in a hierarchy, with the lowest tier being called the authority tier and the others being part of the caching tier. The caching tier can itself be composed of more than one storage area. The hottest data is kept in the caching tier, which is typically less abundant but faster than the authority tier. All the data is kept in the authority tier, which is slower but more abundant.

Tiered storage maximizes performance by ensuring the most frequently used data is kept “closest” to the application using the lowest-latency storage available.

Data stores supported by Ehcache include the following (a configuration sketch combining several tiers appears after the list):

• On-heap store – Utilizes Java’s on-heap RAM memory to store cache entries. This tier utilizes the same heap memory as your Java application, all of which must be scanned by the JVM garbage collector. The more heap space your JVM utilizes, the more your application performance will be impacted by garbage collection pauses. This store is extremely fast, but it is typically your most limited storage resource.

• Off-heap store – Limited in size only by available RAM. Not subject to Java garbage collection. Quite fast, yet slower than the on-heap store because data must be moved to and from the JVM heap as it is stored and re-accessed.


• Disk store – Utilizes a disk (file system) to store cache entries. This type of storage resource is typically very abundant but much slower than the RAM-based stores. As for all applications using disk storage, it is recommended to use a fast and dedicated disk to optimize the throughput.

• Clustered storage – This data store is a cache on a remote server. The remote server may optionally have a failover server providing improved high availability. Since clustered storage comes with performance penalties due to such factors as network latency as well as for establishing client/server consistency, this tier, by nature, is slower than local off-heap storage.
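The configuration sketch below combines three of these tiers in a single cache, assuming the standard Ehcache 3 builders; the disk path, cache alias and pool sizes are illustrative:

import org.ehcache.PersistentCacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;

public class TieredCacheExample {
  public static void main(String[] args) {
    PersistentCacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
        // Root directory for the disk tier (illustrative path)
        .with(CacheManagerBuilder.persistence("data/ehcache"))
        .withCache("tieredCache",
            CacheConfigurationBuilder.newCacheConfigurationBuilder(Long.class, String.class,
                ResourcePoolsBuilder.newResourcePoolsBuilder()
                    .heap(1_000, EntryUnit.ENTRIES)   // caching tier: fastest, most limited
                    .offheap(64, MemoryUnit.MB)       // caching tier: RAM outside the JVM heap
                    .disk(512, MemoryUnit.MB, true))) // authority tier: holds all entries, persists to disk
        .build(true);

    // ... use the cache as usual ...
    cacheManager.close();
  }
}

With this configuration the hottest entries live on heap, warm entries sit off-heap, and the disk tier acts as the authority holding the full data set.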

Caching architecture

Caching topology basics

There are two basic topologies: standalone and distributed/clustered. Each suits different applications, but it is common for many production applications to be deployed in clusters of multiple instances for availability and scalability.

• Standalone - The data set is held in the application node. Any other application nodes are independent with no communication between them. If a standalone topology is used where there are multiple application nodes running the same application, then their caches are completely independent.

• Distributed/clustered - The data is held in a remote server (or array of servers) with a subset of hot data held in each application node. This topology offers a selection of consistency options. A distributed topology is the recommended approach in a clustered or scaled-out application environment, as it provides the best combination of performance, availability and scalability (see the configuration sketch after this list).
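Here is a hedged sketch of a distributed/clustered cache configuration, assuming the Ehcache 3 clustering builders that ship with Terracotta DB; the Terracotta URI, server-side resource name and sizes are placeholders:

import java.net.URI;

import org.ehcache.PersistentCacheManager;
import org.ehcache.clustered.client.config.builders.ClusteredResourcePoolBuilder;
import org.ehcache.clustered.client.config.builders.ClusteringServiceConfigurationBuilder;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;

public class ClusteredCacheExample {
  public static void main(String[] args) {
    PersistentCacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
        // Connect to the Terracotta Server Array and create the server-side state if absent
        .with(ClusteringServiceConfigurationBuilder
            .cluster(URI.create("terracotta://localhost:9410/my-application"))
            .autoCreate())
        .withCache("sharedCache",
            CacheConfigurationBuilder.newCacheConfigurationBuilder(Long.class, String.class,
                ResourcePoolsBuilder.newResourcePoolsBuilder()
                    // Local hot subset held in each application node
                    .heap(100, EntryUnit.ENTRIES)
                    // Server-side storage shared by all clients of this cache
                    .with(ClusteredResourcePoolBuilder
                        .clusteredDedicated("primary-server-resource", 8, MemoryUnit.MB))))
        .build(true);

    // Every application instance configured against the same cache alias sees the same data
    cacheManager.close();
  }
}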

How distributed caching works


There are key challenges when using distributed or clustered topologies, because application clusters can start exhibiting undesirable behaviors, including cache drift and database bottlenecks. Ehcache overcomes these through its distributed cache, which ensures the consistency and synchronization of data across the cluster.

• Cache drift - If each application instance maintains its own cache, updates made to one cache will not appear in the other instances. A distributed cache ensures that all of the cache instances are kept in sync with each other.

• Database bottlenecks - In a single-instance application, a cache effectively shields a database from the overhead of redundant queries. However, in a distributed application environment, each instance must load and keep its own cache fresh. The overhead of loading and refreshing multiple caches leads to database bottlenecks as more application instances are added. A distributed cache eliminates the per-instance overhead of loading and refreshing multiple caches from a database.

Coding a cache with Java and Ehcache

Here is a straightforward example of how to code the lifecycle of a cache in Java. More detailed coding tutorials and examples can be found via links in Recommended Resources at the end of this paper.

Java configuration is most easily achieved through the use of builders that offer a fluent API.

The canonical way of dealing with a Cache is through a CacheManager, which is a primary class in Ehcache and provides all necessary functionality to manage caches and associated services. In this example, we are using it to create, use and finally close a cache:

Code sample - working with caches.
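Below is a minimal sketch of such a cache lifecycle, following the standard Ehcache 3 getting-started pattern; the comments <1> to <10> correspond to the numbered notes that follow, and the alias, types and values are illustrative.

import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;

public class CacheLifecycleExample {
  public static void main(String[] args) {
    CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder() // <1>
        .withCache("preConfigured",                                          // <2>
            CacheConfigurationBuilder.newCacheConfigurationBuilder(
                Long.class, String.class, ResourcePoolsBuilder.heap(100)))
        .build();                                                            // <3>
    cacheManager.init();                                                     // <4>

    Cache<Long, String> preConfigured =
        cacheManager.getCache("preConfigured", Long.class, String.class);    // <5>

    Cache<Long, String> myCache = cacheManager.createCache("myCache",        // <6>
        CacheConfigurationBuilder.newCacheConfigurationBuilder(
            Long.class, String.class, ResourcePoolsBuilder.heap(100)));

    myCache.put(1L, "first value");                                          // <7>
    String value = myCache.get(1L);                                          // <8>

    cacheManager.removeCache("preConfigured");                               // <9>

    cacheManager.close();                                                    // <10>
  }
}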

The following list items refer to the commented numbers in the code.

<1> Using the Builders

The static method org.ehcache.config.builders.CacheManagerBuilder.newCacheManagerBuilder returns a new org.ehcache.config.builders.CacheManagerBuilder instance.

<2> Declare a cache configuration

Use the builder to define a Cache with the alias “preConfigured”. This cache will be created when build() is invoked to produce the actual CacheManager instance. The first String argument is the cache alias, which is used to retrieve the cache from the CacheManager. The second argument, org.ehcache.config.CacheConfiguration, is used to configure the Cache.

In this case, the static newCacheConfigurationBuilder() method is used on org.ehcache.config.builders.CacheConfigurationBuilder to create a default configuration.


<3> Instantiate a CacheManager

Invoking build() returns a fully instantiated, but still uninitialized, CacheManager.

<4> Initialize the CacheManager

Before using the CacheManager it needs to be initialized, which can be done in one of two ways:

• Calling CacheManager.init() on the CacheManager instance, or

• Calling the CacheManagerBuilder.build(boolean init) method with the boolean parameter set to true.

<5> Retrieving the preConfigured Cache and Type-Safety

A cache is retrieved by passing its alias, key type and value type to the CacheManager. For instance, to obtain the cache declared in step 2 you need its alias=preConfigured, keyType=Long.class and valueType=String.class.

Asking for both key and value types to be passed in ensures type-safety. Should these differ from the ones expected, the CacheManager throws a ClassCastException early in the application’s lifecycle. This also guards the Cache from being polluted by random types.

<6> Create a new Cache

The CacheManager can also be used to create new Cache instances as needed.

Just as in step 2, it requires passing an alias as well as a CacheConfiguration.

The newly added Cache is returned fully instantiated and initialized, and it can also be accessed later through the CacheManager.getCache API.

<7> Store data

The newly added Cache can now be used to store entries, which consist of key-value pairs. The put method’s first parameter is the key and the second parameter is the value. Remember the key and value types must be the same types as those defined in the CacheConfiguration. Additionally, the key must be unique and is only associated with one value.

<8> Retrieve data

A value is retrieved from a cache by calling the cache.get(key) method. It takes only one parameter, the key, and returns the value associated with that key. If there is no value associated with that key, null is returned.

<9> Remove and close a given Cache

With CacheManager.removeCache(String) any given Cache can be removed.

The CacheManager will not only remove its reference to the Cache, but will also close it. The Cache releases all locally held transient resources (such as memory). References to this Cache become unusable.

<10> Close all Cache instances

In order to release all transient resources (memory, threads, ...) that a CacheManager provides to its managed Cache instances, CacheManager.close() needs to be invoked. This closes all Cache instances known at the time.


TCStore basics

Using popular industry definitions, TCStore is an “aggregate-oriented, key-value, wide-column NoSQL store.” The individual records stored within TCStore contain cells with type information, enabling the store to make use of the data it holds. However, like other NoSQL stores, TCStore is schema-less in its core design, allowing individual records to contain identical sets of cells, a subset of common cells, or completely different sets of cells.

As such, and like other NoSQL stores, TCStore is not intended for usage patterns that are traditional to tabular data or RDBMSs. Data contained within TCStore are not and cannot be directly relational, and care should be taken to use modeling techniques (such as de-normalization of data) other than those commonly used with RDBMSs.

TCStore organizes its data into collections called datasets:

• Each Dataset is comprised of zero or more records

• Each Record has a key, unique within the dataset, and zero or more cells

• Each Cell has a name, unique within the record; a declared type; and a non-null value

While records within a dataset must have a uniform key type, records are not required to have uniform content. Each record may be comprised of cells having different names and/or different types. To that end, each record and cell is self-describing and will be understood by the storage engine.

TCStore Data Storage Model - typed data

Because records and cells are self-describing, they can be manipulated efficiently in ways that you may not have considered in previous database deployments. For example, there is no need to retrieve an entire record when all you want is a single cell. Equally, cells can be changed directly, and nearly every cell type is searchable and can have indexes applied to improve performance.


Data can be indexed across entire distributed stores for real time search and processing

Datasets

A Dataset is a collection of Record instances. Each Record instance is uniquely identified by a key within the Dataset. The key type is declared when the Dataset is created. Aside from the Record key type, a Dataset has no predefined schema.
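As a rough sketch of obtaining and using a Dataset, the fragment below assumes the DatasetManager entry point and method names described in the Terracotta DB developer documentation; the connection URI, dataset name, key type and cell definition are illustrative, and the exact import paths and signatures should be checked against the API reference in your kit.

import java.net.URI;

import com.terracottatech.store.Dataset;
import com.terracottatech.store.DatasetWriterReader;
import com.terracottatech.store.Type;
import com.terracottatech.store.definition.CellDefinition;
import com.terracottatech.store.manager.DatasetManager;

public class DatasetSketch {
  public static void main(String[] args) throws Exception {
    CellDefinition<Double> humidity = CellDefinition.defineDouble("humidity");

    // Connect to a Terracotta Server Array; the URI is a placeholder
    try (DatasetManager datasetManager =
             DatasetManager.clustered(URI.create("terracotta://localhost:9410")).build()) {
      // Assumes a Dataset named "readings" with a LONG key type was created beforehand
      try (Dataset<Long> readings = datasetManager.getDataset("readings", Type.LONG)) {
        DatasetWriterReader<Long> access = readings.writerReader();
        access.add(42L, humidity.newCell(55.2));          // record with key 42 and one cell
        access.get(42L).ifPresent(record ->
            System.out.println(record.get(humidity).orElse(Double.NaN)));
      }
    }
  }
}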

Records

A Record is a key plus an unordered set of “name to (typed) value” pairs representing a natural aggregate of your domain model. Each Record within a Dataset can hold completely different sets of cells, as there is no schema to conform to. Record instances held within a given Dataset are immutable. Changing one or multiple values on a Record creates a new instance of that Record which replaces the old instance.

Record represents the only atomically alterable type in TCStore. You can mutate as many Cells of a given Record as you wish as an atomic action, but not across multiple Record instances. This includes adding and removing cells at the same time as changing the values of existing cells.

Cell definitions and values

A Record contains zero or more Cell instances, each derived from a CellDefinition. A CellDefinition is a “type/name” pair (e.g. String firstName). From a CellDefinition you can create a Cell (e.g. firstName = “Alex”, where “Alex” is of type String) to store in a Record. The name of the Cell is the name from the CellDefinition used to create the cell; the value of the Cell is of the type specified in the CellDefinition used to create the cell.

A code example

While this document is only meant to serve as an introduction to Terracotta DB, here is a simple code example of how we take the principles outlined above and create a CellDefinition that would describe cells used in a dataset. The following example shows various ways of specifying a CellDefinition for a cell holding a String value.
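Below is a sketch of what such an example might look like, assuming the CellDefinition and Type classes from the TCStore API; the import paths, cell names and the newCell() call are illustrative and should be checked against the API reference in your kit. The comments 1 to 3 correspond to the notes that follow.

import com.terracottatech.store.Type;
import com.terracottatech.store.definition.CellDefinition;
import com.terracottatech.store.definition.StringCellDefinition;

public class CellDefinitionExample {
  public static void main(String[] args) {
    // 1. A String-valued CellDefinition via the type-specific factory method
    StringCellDefinition firstName = CellDefinition.defineString("firstName");

    // 2. A StringCellDefinition is also a CellDefinition<String>
    CellDefinition<String> alsoFirstName = firstName;

    // 3. The generic define() factory with an explicit Type achieves the same result
    CellDefinition<String> lastName = CellDefinition.define("lastName", Type.STRING);

    // From a definition you can create a cell to store in a Record (newCell() assumed from the TCStore API)
    System.out.println(firstName.newCell("Alex").value());
  }
}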



1. A CellDefinition supporting a value type of String can be created using the CellDefinition.defineString() method.

2. A StringCellDefinition is also a CellDefinition<String>.

3. In addition to the CellDefinition.defineString() method, the CellDefinition.define() method may be used to create a CellDefinition for a String.

If you want to explore the TCStore API in more detail, please refer to Recommended Resources at the end of this paper for links to API references and other documents that contain lots of coding examples and tutorials.

Support for Terracotta DB

Helping enterprises succeed with our technology is the top priority at Software AG. That’s why our Global Support Services assure your success from Day One. Our support package—Enterprise Active Support—delivers the fastest, most agile and proactive support for your mission-critical Terracotta DB implementations.

Count on our team to:

Secure & accelerate your success

• 24/7 phone support for ALL incidents

• 24/7 access to customer portal, including Empower and TECHcommunity resources

• Multi-regional support to assist your distributed development and operations teams

• Unlimited Authorized Technical Contacts—local support experts around the globe—to open new incidents

Guide your success

• Best practices based on years of experience

• Technical documents on topics like platform sizing, performance tuning and more

Take the next step

If you’d like to know how Terracotta DB could improve your application performance or lower the costs and complexity of your infrastructure, talk to your Software AG representative or visit www.softwareag.com.

Recommended resources

• Find out more about our in-memory solutions at http://www2.softwareag.com/corporate/products/terracotta/default.aspx

• For Terracotta community resources, visit http://www.terracotta.org/

• Download trial software from http://www.terracotta.org/downloads/

• Documentation is available at http://www.terracotta.org/documentation/

ABOUT SOFTWARE AG

Software AG (Frankfurt TecDAX: SOW) helps companies with their digital transformation. With Software AG’s Digital Business Platform, companies can better interact with their customers and bring them on new ‘digital’ journeys, promote unique value propositions, and create new business opportunities. In the Internet of Things (IoT) market, Software AG enables enterprises to integrate, connect and manage IoT components as well as analyze data and predict future events based on Artificial Intelligence (AI). The Digital Business Platform is built on decades of uncompromising software development, IT experience and technological leadership. Software AG has more than 4,500 employees, is active in 70 countries and had revenues of €872 million in 2016. To learn more, visit www.softwareag.com.

© 2017 Software AG. All rights reserved. Software AG and all Software AG products are either trademarks or registered trademarks of Software AG. Other product and company names mentioned herein may be the trademarks of their respective owners.

SAG_Terracotta_DB_16PG_WP_Oct17