Ampool Usage Patterns (Part 2): Application Acceleration

January 10, 2018|by: Milind Bhandarkar

Almost all applications are powered by one or more persistent data stores. Relational OLTP DBMSs have most often served as the data-serving backends. In recent years, however, large scale-out applications have turned to non-relational (NoSQL) data stores, which sacrifice consistency for availability. The user experience of most modern mobile and web applications is powered on the front end by multiple API calls to backend services, which are in turn backed by multiple data-serving platforms.

Previous State

Typical application architectures are composed of multiple tiers (sometimes with load balancers and proxies between them). The application makes API calls to application servers, which translate each call into multiple requests to the appropriate data-serving systems in the backend, collect the responses, combine them using business logic, and send the result back to the application, which renders it on the front end. When there is temporal locality (data fetched recently will be fetched again soon), the data-serving backends are augmented with a lookaside in-memory caching layer, such as Redis or Memcached, to reduce the load on the persistent data-serving systems.
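The lookaside (cache-aside) pattern described above can be sketched as follows. This is a minimal in-process illustration, not Ampool or Redis code: the `cache` and `database` dicts stand in for a real cache server and a real persistent store, and the names are hypothetical.

```python
import time

# Hypothetical stand-ins: in practice the cache would be Redis/Memcached
# and the store a relational or NoSQL database.
cache = {}                                # lookaside cache: key -> (value, expiry)
database = {"user:42": {"name": "Ada"}}   # persistent data store
TTL_SECONDS = 300

def fetch(key):
    """Cache-aside read: try the cache first, fall back to the store on a miss."""
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                       # cache hit: no backend round trip
    value = database.get(key)                 # cache miss: query the backend
    if value is not None:
        cache[key] = (value, time.time() + TTL_SECONDS)
    return value

fetch("user:42")   # miss: reads the database and populates the cache
fetch("user:42")   # hit: served from memory
```

Temporal locality is what makes this pay off: the second `fetch` of the same key never touches the persistent store.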

Challenges & Deficiencies

As the number of applications, and the number of concurrent users of each application, increases, the data-serving systems are overwhelmed by concurrent requests and tend to spend much of their time (query latency) waiting for individual requests to complete. Because a single application request may result in multiple requests to the underlying data stores, the application response is blocked until the last of those responses arrives. These data-fetch requests can be arbitrarily complex, and may result in a sequential scan of all records in the data store, exhibiting long-tail behavior.
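The fan-out effect is easy to demonstrate: even when backend requests are issued in parallel, the application response is gated by the slowest store. A small sketch, with made-up per-backend latencies:

```python
import concurrent.futures
import time

# Hypothetical per-backend latencies (seconds) for one application request
# that fans out to three data stores.
BACKEND_LATENCIES = {"orders": 0.02, "profiles": 0.01, "inventory": 0.15}

def query_backend(name):
    time.sleep(BACKEND_LATENCIES[name])   # simulate a data-store call
    return name

start = time.time()
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = list(pool.map(query_backend, BACKEND_LATENCIES))
elapsed = time.time() - start

# The combined response takes at least as long as the slowest backend,
# even though the three calls ran concurrently.
assert elapsed >= max(BACKEND_LATENCIES.values())
```

With many backends, the maximum of many latency distributions is dominated by the tail, which is exactly the long-tail behavior described above.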

We identify the core problem in such systems as the inability to efficiently construct a logical view over multiple data stores, serve these views concurrently to multiple applications, allow the views to be updated, and propagate those updates back to the underlying data stores.

Adding an in-memory caching layer to each underlying data store takes advantage of temporal locality and speeds up access to each individual store. However, the application server still has to execute business logic and combine responses (or issue additional requests) before data is returned to the application. In addition, since a single-node lookaside cache is not fault tolerant, applications see severely degraded performance when the cache goes down. A number of distributed, fault-tolerant caching systems, such as Apache Geode, Apache Ignite, and Hazelcast, aim to solve the cache-availability issue. These systems are essentially distributed in-memory hash tables, mapping a primary key to an arbitrary value, and provide fast key-based lookups on that table. However, they do not solve the fundamental problem of serving updatable materialized views that may require fast, short, scan-oriented queries.
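The distinction between key-based lookup and a short scan can be illustrated in a few lines. A hash table answers point lookups in O(1) but has no key ordering, so a range query must examine every entry; a scan-friendly view keeps keys sorted so a range query touches only the matching slice. The keys and data below are invented for illustration:

```python
import bisect

# A distributed cache is essentially a hash table: fast point lookups,
# but no ordering among keys.
kv_cache = {
    "order:2021-03-01:17": "...",
    "order:2021-03-02:03": "...",
    "order:2021-04-11:09": "...",
}

point = kv_cache["order:2021-03-02:03"]   # O(1) key-based lookup

# A scan-oriented view keeps keys sorted, so a range ("short scan") query
# locates the matching prefix with binary search instead of a full scan.
sorted_keys = sorted(kv_cache)
lo = bisect.bisect_left(sorted_keys, "order:2021-03")
hi = bisect.bisect_left(sorted_keys, "order:2021-04")
march_orders = sorted_keys[lo:hi]         # only the March keys
```

This ordered-scan capability, combined with updatability, is what plain distributed hash tables lack.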

With Ampool

The block diagram above illustrates the data flow when Ampool is used as application-acceleration middleware.

  1. Ampool pre-loads the initial datasets
  2. Ampool constructs in-memory materialized views, embedding business logic as stored procedures
  3. The application requests data from the view access layer
  4. The view access layer parses the request, adds metadata, and forwards it to Ampool
  5. Ampool checks authorization at the view level, and serves a projection/subset of the view
  6. The view access layer performs additional transformations
  7. The response is sent back to the application
  8. In case of updates, Ampool asynchronously updates the underlying databases
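
The steps above can be sketched in miniature. This is not Ampool's API; every name here (`serve`, `update`, `view_acl`, the sample view) is a hypothetical stand-in to show the shape of the flow: view-level authorization, projection of the view, and a write-behind queue for asynchronous propagation to the backing stores.

```python
import queue

# Step 1-2: a view pre-built from the backing stores (contents invented).
materialized_view = {
    "customer_summary": [{"id": 1, "name": "Ada", "balance": 120}],
}
view_acl = {"customer_summary": {"reporting_app"}}   # view-level authorization
write_behind = queue.Queue()   # step 8: drained by a background worker

def serve(view, tenant, columns):
    """Steps 3-7: authorize at the view level, return a projection of the view."""
    if tenant not in view_acl.get(view, set()):
        raise PermissionError(f"{tenant} may not read {view}")
    return [{c: row[c] for c in columns} for row in materialized_view[view]]

def update(view, row):
    """Step 8: apply an update to the view, then propagate it asynchronously."""
    materialized_view[view].append(row)
    write_behind.put((view, row))   # the underlying databases are updated later

rows = serve("customer_summary", "reporting_app", ["id", "balance"])
```

The write-behind queue decouples the application's response time from the latency of the underlying databases, which is the essence of step 8.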

Why Ampool?

Ampool has many features that enable it to be used as a multi-tenant application-acceleration platform.

Please note that Ampool does not enforce distributed transactional updates across multiple non-collocated tables by default. However, by integrating Ampool clients with an external transaction manager (such as Apache Omid or Apache Tephra), one can implement transactional behavior if needed.

When to consider Ampool for Application Acceleration?

You should consider Ampool as middleware for application acceleration if:

If you are interested in exploring Ampool, write to us to schedule a demo.

Get Started with Ampool

Try Ampool developer edition for free.