How AppsFlyer modernized their interactive workload by moving to Amazon Athena and saved 80% of costs

This post is co-written with Nofar Diamant and Matan Safri from AppsFlyer.

AppsFlyer develops a leading measurement solution focused on privacy, which enables marketers to gauge the effectiveness of their marketing activities and integrates them with the broader marketing world, managing a massive volume of 100 billion events every day. AppsFlyer empowers digital marketers to precisely identify and allocate credit to the various consumer interactions that lead up to an app install, using in-depth analytics.

Part of AppsFlyer's offering is the Audiences Segmentation product, which allows app owners to precisely target and re-engage users based on their behavior and demographics. This includes a feature that provides real-time estimation of audience sizes within specific user segments, referred to as the Estimation feature.

To provide users with real-time estimation of audience size, the AppsFlyer team originally used Apache HBase, an open source distributed database. However, as the workload grew to 23 TB, the HBase architecture needed to be revisited to meet service level agreements (SLAs) for response time and reliability.

This post explores how AppsFlyer modernized their Audiences Segmentation product by using Amazon Athena. Athena is a powerful and versatile serverless query service provided by AWS. It's designed to make it straightforward for users to analyze data stored in Amazon Simple Storage Service (Amazon S3) using standard SQL queries.

We dive into the various optimization techniques AppsFlyer employed, such as partition projection, sorting, parallel query runs, and the use of query result reuse. We share the challenges the team faced and the techniques they adopted to unlock the true potential of Athena in a use case with low-latency requirements. Additionally, we discuss the thorough testing, monitoring, and rollout process that resulted in a successful transition to the new Athena architecture.

Audiences Segmentation legacy architecture and modernization drivers

Audience segmentation involves defining targeted audiences in AppsFlyer's UI, represented by a directed tree structure with set operations and atomic criteria as nodes and leaves, respectively.

The following diagram shows an example of audience segmentation on the AppsFlyer Audiences management console and its translation to the tree structure described, with the two atomic criteria as the leaves and the set operation between them as the node.

Audience segmentation tool and its translation to a tree structure

To provide users with real-time estimation of audience size, the AppsFlyer team used a framework called Theta Sketches, which is an efficient data structure for counting distinct elements. These sketches enhance scalability and analytical capabilities. They were originally stored in the HBase database.
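As a rough illustration of how such sketches behave, the following minimal Python example uses the Apache DataSketches library (the datasketches package) to count distinct users per atomic criterion and intersect two criteria, mirroring a set-operation node in the segment tree described above; the event names and user IDs are purely hypothetical.

```python
# pip install datasketches
from datasketches import update_theta_sketch, theta_intersection

# One sketch per atomic criterion (hypothetical users and events).
purchasers = update_theta_sketch()
for user_id in ["u1", "u2", "u3", "u4"]:
    purchasers.update(user_id)   # users who triggered af_purchase

openers = update_theta_sketch()
for user_id in ["u3", "u4", "u5"]:
    openers.update(user_id)      # users who triggered af_app_open

# Set operation at the tree node: users matching both criteria.
intersection = theta_intersection()
intersection.update(purchasers)
intersection.update(openers)

# Approximate distinct-user count for the combined segment.
print(intersection.get_result().get_estimate())
```

Because sketches are small and mergeable, the estimation service can combine them at query time instead of re-scanning the raw events.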

HBase is an open source, distributed, columnar database, designed to handle large volumes of data across commodity hardware with horizontal scalability.

Original data structure

In this post, we focus on the events table, the largest table originally stored in HBase. The table had the schema date | app-id | event-name | event-value | sketch and was partitioned by date and app-id.

The following diagram shows the high-level original architecture of the AppsFlyer Estimations system.

High level architecture of the Estimations system

The architecture featured an Airflow ETL process that initiates jobs to create sketch files from the source dataset, followed by the import of those files into HBase. Users could then use an API service to query HBase and retrieve estimations of user counts according to the audience segment criteria set up in the UI.

To learn more about the previous HBase architecture, see Applied Probability – Counting Large Set of Unstructured Events with Theta Sketches.

Over time, the workload exceeded the scale for which the HBase implementation was originally designed, reaching a storage size of 23 TB. It became apparent that in order to meet AppsFlyer's SLA for response time and reliability, the HBase architecture needed to be revisited.

As previously mentioned, the focus of the use case entailed daily interactions by customers with the UI, necessitating adherence to a UI-standard SLA that provides quick response times and the ability to handle a substantial number of daily requests, while accommodating the current data volume and potential future growth.

Additionally, due to the high cost associated with operating and maintaining HBase, the intention was to find an alternative that is managed, straightforward, and cost-effective, and that wouldn't significantly complicate the existing system architecture.

Following thorough team discussions and consultations with the AWS experts, the team concluded that a solution using Amazon S3 and Athena stood out as the most cost-effective and straightforward choice. The primary concern was related to query latency, and the team was particularly careful to avoid any adverse effects on the overall customer experience.

The following diagram illustrates the new architecture using Athena. Notice that import-..-sketches-to-hbase and HBase were omitted, and Athena was added to query data in Amazon S3.

High level architecture of the Estimations system using Athena

Schema design and partition projection for performance enhancement

In this section, we discuss the process of schema design in the new architecture and the different performance optimization methods that the team used, including partition projection.

Merging data for partition reduction

In order to evaluate whether Athena could be used to support Audiences Segmentation, an initial proof of concept was conducted. The scope was limited to events arriving from three app-ids (approximately 3 GB of data) partitioned by app-id and by date, using the same partitioning schema that was used in the HBase implementation. As the team scaled up to include the entire dataset, with 10,000 app-ids for a 1-month time range (reaching approximately 150 GB of data), they started to see more slow queries, especially for queries that spanned significant time ranges. The team dived deep and discovered that Athena spent significant time in the query planning stage due to the large number of partitions (7.3 million) that it loaded from the AWS Glue Data Catalog (for more information about using Athena with AWS Glue, see Integration with AWS Glue).

This led the team to examine partition indexing. Athena partition indexes provide a way to create metadata indexes on partition columns, allowing Athena to prune the data scan at the partition level, which can reduce the amount of data that needs to be read from Amazon S3. Partition indexing shortened the partition discovery time in the query planning stage, but the improvement wasn't substantial enough to meet the required query latency SLA.

As an alternative to partition indexing, the team evaluated a strategy to reduce the number of partitions by lowering data granularity from daily to monthly. This method consolidated daily data into monthly aggregates by merging day-level sketches into monthly composite sketches using the Theta Sketches union capability. For example, for a month's worth of data, instead of having 30 rows of data per month, the team united those rows into a single row, effectively cutting the row count by 97%.
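The following minimal Python sketch, again using the Apache DataSketches library, shows the kind of day-to-month merge described above: 30 day-level sketches are combined with a theta union into a single monthly composite sketch (the daily sketches here are synthetic placeholders).

```python
# pip install datasketches
from datasketches import update_theta_sketch, theta_union

# Synthetic stand-ins for 30 day-level sketches of one app-id/event-name.
daily_sketches = []
for day in range(30):
    sketch = update_theta_sketch()
    for user_id in range(day * 1_000, day * 1_000 + 5_000):
        sketch.update(f"user-{user_id}")
    daily_sketches.append(sketch)

# Merge the 30 daily rows into a single monthly row.
union = theta_union()
for sketch in daily_sketches:
    union.update(sketch)

monthly_sketch = union.get_result()
print(f"Estimated distinct users for the month: {monthly_sketch.get_estimate():.0f}")
```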

This method greatly reduced the time needed for the partition discovery phase by 30%, which initially took approximately 10–15 seconds, and it also reduced the amount of data that had to be scanned. However, the expected latency goals based on the UI's responsiveness requirements were still not met.

Furthermore, the merging process inadvertently compromised the precision of the data, leading to the exploration of other solutions.

Partition projection as a performance booster

At this point, the team decided to explore partition projection in Athena.

Partition projection in Athena allows you to improve query efficiency by projecting the metadata of your partitions. It virtually generates and discovers partitions as needed, without requiring the partitions to be explicitly defined in the database catalog beforehand.

This feature is particularly useful when dealing with large numbers of partitions, or when partitions are created rapidly, as in the case of streaming data.

As we explained earlier, in this particular use case, each leaf is an access pattern that is translated into a query that must contain a date range, app-id, and event-name. This led the team to define the projection columns by using the date type for the date range and the injected type for app-id and event-name.

Rather than scanning and loading all partition metadata from the catalog, Athena can generate the partitions to query using configured rules and values from the query. This avoids the need to load and filter partitions from the catalog by generating them on the fly.

The projection process helped avoid performance issues caused by a high number of partitions, eliminating the latency of partition discovery during query runs.

Because partition projection eliminated the dependency between the number of partitions and query runtime, the team could experiment with an additional partition: event-name. Partitioning by three columns (date, app-id, and event-name) reduced the amount of scanned data, resulting in a 10% improvement in query performance compared to partition projection with data partitioned only by date and app-id.

The following diagram illustrates the high-level data flow of sketch file creation. Focusing on the sketch writing process (write-events-estimation-sketches) into Amazon S3, using three partition fields caused the process to run twice as long compared to the original architecture, due to an increased number of sketch files (writing 20 times more sketch files to Amazon S3).

High level data flow of Sketch file creation

This prompted the team to drop the event-name partition and compromise on two partitions, date and app-id, resulting in the following partition structure:

s3://bucket/table_root/date=${day}/app_id=${app_id}
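The following is a hedged sketch (not AppsFlyer's actual DDL) of how a table over this layout can be declared with partition projection enabled, submitted here through the boto3 Athena client; the database, bucket, column names, and projection date range are illustrative assumptions.

```python
import boto3

# Hypothetical table definition matching the date/app_id partition layout.
DDL = """
CREATE EXTERNAL TABLE IF NOT EXISTS estimations_db.events_sketches (
  event_name  string,
  event_value string,
  sketch      binary
)
PARTITIONED BY (`date` string, app_id string)
STORED AS PARQUET
LOCATION 's3://bucket/table_root/'
TBLPROPERTIES (
  'projection.enabled'        = 'true',
  'projection.date.type'      = 'date',
  'projection.date.format'    = 'yyyy-MM-dd',
  'projection.date.range'     = '2022-01-01,NOW',
  'projection.app_id.type'    = 'injected',
  'storage.location.template' = 's3://bucket/table_root/date=${date}/app_id=${app_id}'
)
"""

athena = boto3.client("athena")
athena.start_query_execution(
    QueryString=DDL,
    ResultConfiguration={"OutputLocation": "s3://bucket/athena-query-results/"},
)
```

Because app_id uses the injected projection type, every query against this table must supply a concrete app_id value in its WHERE clause, which matches the access pattern described earlier.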

Using the Parquet file format

In the new architecture, the team used the Parquet file format. Apache Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. Each Parquet file contains metadata, such as the minimum and maximum value of each column, that allows the query engine to skip loading unneeded data. This optimization reduces the amount of data that needs to be scanned, because Athena can skip or quickly navigate through sections of the Parquet file that are irrelevant to the query. As a result, query performance improves significantly.

Parquet is particularly effective when querying sorted fields, because it allows Athena to apply predicate pushdown optimization and quickly identify and access the relevant data segments. To learn more about this capability of the Parquet file format, see Understanding columnar storage formats.

Recognizing this advantage, the team decided to sort by event-name to enhance query performance, achieving a 10% improvement compared to non-sorted data. Initially, they tried partitioning by event-name to optimize performance, but this approach increased writing time to Amazon S3. Sorting delivered the query time improvement without the ingestion overhead.
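A minimal PySpark sketch of this write pattern, assuming a DataFrame with the columns described earlier (the application name, paths, and exact column names are illustrative rather than AppsFlyer's actual job):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("write-events-estimation-sketches").getOrCreate()

# Illustrative source: rows of date, app_id, event_name, event_value, sketch.
sketches_df = spark.read.parquet("s3://bucket/staging/sketches/")

(
    sketches_df
    # Group each output partition's rows together and sort them by event_name,
    # so Parquet row-group min/max statistics let Athena prune irrelevant data.
    .repartition("date", "app_id")
    .sortWithinPartitions("event_name")
    .write.partitionBy("date", "app_id")
    .mode("overwrite")
    .parquet("s3://bucket/table_root/")
)
```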

Query optimization and parallel queries

The team discovered that performance could be improved further by running parallel queries. Instead of a single query over a long window of time, multiple queries were run over shorter windows. Even though this increased the complexity of the solution, it improved performance by about 20% on average.

For instance, consider a scenario where a user requests the estimated size of app com.demo and event af_purchase between April 2024 and the end of June 2024 (as illustrated earlier, the segmentation is defined by the user and then translated to an atomic leaf, which is then broken down into multiple queries depending on the date range). The following diagram illustrates the process of breaking down the initial 3-month query into two separate queries of up to 60 days each, running them concurrently, and then merging the results.

Splitting query by date range
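The following Python sketch shows the splitting pattern with boto3 and a thread pool; the sub-range boundaries, query text, table name, and output location are illustrative assumptions, and the sketch rows returned per sub-range would still be merged by the calling service as described above.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import boto3

athena = boto3.client("athena")

QUERY_TEMPLATE = """
SELECT sketch
FROM estimations_db.events_sketches
WHERE app_id = 'com.demo'
  AND event_name = 'af_purchase'
  AND "date" BETWEEN '{start}' AND '{end}'
"""

# The 3-month window split into two sub-ranges of up to 60 days each.
date_ranges = [("2024-04-01", "2024-05-30"), ("2024-05-31", "2024-06-30")]

def run_query(date_range):
    start, end = date_range
    query_id = athena.start_query_execution(
        QueryString=QUERY_TEMPLATE.format(start=start, end=end),
        ResultConfiguration={"OutputLocation": "s3://bucket/athena-query-results/"},
    )["QueryExecutionId"]
    # Poll until the sub-range query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return query_id, state
        time.sleep(1)

# Run both sub-range queries concurrently and collect their execution IDs.
with ThreadPoolExecutor(max_workers=len(date_ranges)) as executor:
    print(list(executor.map(run_query, date_ranges)))
```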

Reducing result set size

In analyzing performance bottlenecks, examining the different types and properties of the queries, and analyzing the different phases of the query run, it became clear that specific queries were slow in fetching query results. This problem wasn't rooted in the actual query run, but in data transfer from Amazon S3 during the GetQueryResults phase, due to query results containing a large number of rows (a single result can contain millions of rows).

The initial approach of handling multiple key-value permutations in a single sketch inflated the number of rows considerably. To overcome this, the team introduced a new event-attr-key field to separate sketches into distinct key-value pairs.

The final schema looked as follows:

date | app-id | event-name | event-attr-key | event-attr-value | sketch

This refactoring resulted in a drastic reduction of result rows, which significantly sped up the GetQueryResults process, improving overall query runtime by 90%.

Athena query result reuse

To handle a common use case in the Audiences Segmentation GUI where users often make subtle adjustments to their queries, such as adjusting filters or slightly altering time windows, the team used the Athena query result reuse feature. This feature improves query performance and reduces costs by caching and reusing the results of previous queries. It plays a pivotal role, particularly when combined with the recent enhancement of splitting date ranges. The ability to reuse and swiftly retrieve results means that these minor but frequent modifications no longer require a full query reprocessing.
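A minimal example of enabling this feature per query through the boto3 Athena client; the workgroup, maximum result age, and query text are illustrative assumptions:

```python
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="""
        SELECT sketch
        FROM estimations_db.events_sketches
        WHERE app_id = 'com.demo'
          AND event_name = 'af_purchase'
          AND "date" BETWEEN '2024-04-01' AND '2024-05-30'
    """,
    WorkGroup="estimations",
    ResultConfiguration={"OutputLocation": "s3://bucket/athena-query-results/"},
    # Reuse cached results of an identical query that ran within the last hour.
    ResultReuseConfiguration={
        "ResultReuseByAgeConfiguration": {"Enabled": True, "MaxAgeInMinutes": 60}
    },
)
print(response["QueryExecutionId"])
```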

As a result, the latency of repeated query runs was reduced by up to 80%, enhancing the user experience by providing faster insights. This optimization not only accelerates data retrieval but also significantly reduces costs, because there's no need to rescan data for every minor change.

Solution rollout: Testing and monitoring

In this section, we discuss the process of rolling out the new architecture, including testing and monitoring.

Fixing Amazon S3 slowdown errors

During the solution testing phase, the team developed an automation process designed to assess the different audiences within the system, using the data organized within the newly implemented schema. The methodology involved a comparative analysis of results obtained from HBase against those derived from Athena.

While running these tests, the team examined the accuracy of the estimations retrieved as well as the change in latency.

In this testing phase, the team encountered some failures when running many concurrent queries at once. These failures were caused by Amazon S3 throttling due to too many GET requests to the same prefix produced by concurrent Athena queries.

In order to handle the throttling (slowdown errors), the team added a retry mechanism for query runs with an exponential backoff strategy (the wait time increases exponentially, with a random offset to prevent concurrent retries).
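A hedged Python sketch of such a retry wrapper with exponential backoff and random jitter; the error-message matching, attempt limit, and polling interval are simplified assumptions rather than AppsFlyer's production code:

```python
import random
import time

import boto3

athena = boto3.client("athena")

def run_with_retries(query_string, output_location, max_attempts=5):
    """Run an Athena query, retrying S3 slowdown failures with exponential backoff."""
    for attempt in range(max_attempts):
        query_id = athena.start_query_execution(
            QueryString=query_string,
            ResultConfiguration={"OutputLocation": output_location},
        )["QueryExecutionId"]

        # Wait for the query to reach a terminal state.
        while True:
            status = athena.get_query_execution(QueryExecutionId=query_id)[
                "QueryExecution"]["Status"]
            if status["State"] in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(1)

        if status["State"] == "SUCCEEDED":
            return query_id

        # Retry only throttling failures; matching on the reason text is an assumption.
        reason = status.get("StateChangeReason", "")
        if "SlowDown" not in reason and "reduce your request rate" not in reason:
            raise RuntimeError(f"Query failed: {reason}")

        # Wait time grows exponentially, with a random offset to de-synchronize retries.
        time.sleep((2 ** attempt) + random.uniform(0, 1))

    raise RuntimeError("Query still throttled after retries")
```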

Rollout preparations

At first, the team initiated a 1-month backfilling process as a cost-conscious approach, prioritizing accuracy validation before committing to a complete 2-year backfill.

The backfilling process included running the Spark job (write-events-estimation-sketches) over the desired time range. The job read from the data warehouse, created sketches from the data, and wrote them to files in the specific schema that the team defined. Additionally, because the team used partition projection, they could skip the process of updating the Data Catalog with every partition being added.

This step-by-step approach allowed them to confirm the correctness of their solution before proceeding with the entire historical dataset.

With confidence in the accuracy achieved during the initial phase, the team systematically expanded the backfilling process to cover the full 2-year timeframe, assuring a thorough and reliable implementation.

Before the official launch of the updated solution, a robust monitoring strategy was implemented to safeguard stability. Key monitors were configured to track critical aspects, such as query and API latency, error rates, and API availability.

After the data was stored in Amazon S3 as Parquet files, the following rollout process was designed:

  1. Keep both the HBase and Athena writing processes running, stop reading from HBase, and start reading from Athena.
  2. Stop writing to HBase.
  3. Sunset HBase.

Improvements and optimizations with Athena

The migration from HBase to Athena, using partition projection and optimized data structures, has not only resulted in a 10% improvement in query performance, but has also significantly boosted overall system stability by scanning only the required data partitions. In addition, the transition to a serverless model with Athena has achieved an impressive 80% reduction in monthly costs compared to the previous setup. This is due to eliminating infrastructure management expenses and aligning costs directly with usage, thereby positioning the organization for more efficient operations, improved data analysis, and superior business outcomes.

The following table summarizes the improvements and the optimizations implemented by the team.

Area of Improvement | Action Taken | Measured Improvement
Athena partition projection | Partition projection over the large number of partitions, avoiding a limit on the number of partitions; partitioning by event_name and app_id | Hundreds of percent improvement in query performance. This was the most significant improvement, which made the solution feasible.
Partitioning and sorting | Partitioning by app_id and sorting event_name with daily granularity | 100% improvement in the jobs calculating the sketches. 5% improvement in query latency.
Time range queries | Splitting long time range queries into multiple queries running in parallel | 20% improvement in query performance.
Reducing result set size | Schema refactoring | 90% improvement in overall query time.
Query result reuse | Supporting Athena query result reuse | 80% improvement for queries that ran more than once in the given time window.

Conclusion

In this post, we showed how Athena became the main component of the AppsFlyer Audiences Segmentation offering. We explored various optimization techniques such as data merging, partition projection, schema redesign, parallel queries, the Parquet file format, and the query result reuse feature.

We hope our experience provides valuable insights to enhance the performance of your Athena-based applications. Additionally, we recommend reviewing Athena performance best practices for further guidance.


About the Authors

Nofar Diamant is a software team lead at AppsFlyer with a current focus on fraud protection. Before diving into this realm, she led the Retargeting team at AppsFlyer, which is the subject of this post. In her spare time, Nofar enjoys sports and is passionate about mentoring women in technology. She is dedicated to shifting the industry's gender demographics by increasing the presence of women in engineering roles and encouraging them to succeed.

Matan Safri is a backend developer focusing on big data in the Retargeting team at AppsFlyer. Before joining AppsFlyer, Matan was a backend developer in the IDF and completed an MSC in electrical engineering, majoring in computers at BGU university. In his spare time, he enjoys wave surfing, yoga, traveling, and playing the guitar.

Michael Pelts is a Principal Solutions Architect at AWS. In this position, he works with major AWS customers, assisting them in developing innovative cloud-based solutions. Michael enjoys the creativity and problem-solving involved in building effective cloud architectures. He also likes sharing his extensive experience in SaaS, analytics, and other domains, empowering customers to elevate their cloud expertise.

Orgad Kimchi is a Senior Technical Account Manager at Amazon Web Services. He serves as the customer's advocate and assists his customers in achieving cloud operational excellence, focusing on architecture and AI/ML in alignment with their business goals.
