Big Data expertise for a top global retailer

Among many other benefits, we support the coordination of supply and transport during the COVID-19 pandemic by streamlining data access and improving the observability of data pipelines.

Industry: Retail
Technology: Apache Spark, Apache Hive, HDP Spark, Kafka, Hadoop
VirtusLab’s team used its Big Data technology stack expertise to develop valuable data solutions for one of the five largest retailers in the world.
The real big data

The client has been collecting data for more than 30 years. The size of the data grew beyond what standard SQL storage could handle, even a specialized clustered Teradata system. The business was concerned that moving the data to a Hadoop cluster would worsen the performance of daily queries.

Integration of data

The project consists of three teams responsible for three domains: Store, Product, and Fulfilment. In each domain we integrate data volumes from many sources (such as RDBMSs, Kafka, REST APIs, and files) to later build analytics on top of them.

Data is saved on the Hadoop cluster in a well-defined structure, and each data flow is scheduled using Oozie. There are also flows that publish pre-aggregated data from the cluster to external clients. Each team has its own CI pipeline (almost CD) on Jenkins, which allows it to build and deploy changes quickly.
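As an illustration, a minimal Spark batch ingestion job in this style might look as follows. The JDBC URL, table names, and HDFS paths are hypothetical placeholders rather than the client's actual configuration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.current_date

object JdbcIngestionJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("store-jdbc-ingestion")
      .enableHiveSupport()
      .getOrCreate()

    // Pull a snapshot of one source table over JDBC; URL and table are placeholders.
    val stores = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://source-db:5432/stores")
      .option("dbtable", "store_master")
      .option("fetchsize", "10000")
      .load()

    // Land on HDFS as date-partitioned Parquet so downstream Hive/Spark
    // queries can prune partitions instead of scanning everything.
    stores
      .withColumn("snapshot_date", current_date())
      .write
      .mode("overwrite")
      .partitionBy("snapshot_date")
      .parquet("hdfs:///data/store/store_master")

    spark.stop()
  }
}
```

A job of this shape is what an Oozie coordinator would trigger on its schedule, one instance per data flow.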

 


Monitoring and alerting

We developed a common logging model that allows logs to be transferred efficiently from the cluster to Splunk. Thanks to dedicated dashboards defined in Splunk, we monitor the pipelines and surface various types of warnings to the team and to clients.
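A minimal sketch of what such a shared log-event schema can look like, assuming a Splunk forwarder ships the emitted JSON lines; the field names here are illustrative, not the actual model:

```scala
// One shared event shape used by every pipeline keeps Splunk field
// extraction and dashboard queries uniform across teams.
final case class PipelineLogEvent(
  timestamp: java.time.Instant,
  pipeline: String,
  stage: String,
  level: String,
  message: String,
  recordsProcessed: Long
) {
  // One JSON object per line ("JSON Lines"); string escaping omitted for brevity.
  def toJson: String =
    s"""{"timestamp":"$timestamp","pipeline":"$pipeline","stage":"$stage",""" +
      s""""level":"$level","message":"$message","recordsProcessed":$recordsProcessed}"""
}

object LoggingExample extends App {
  val event = PipelineLogEvent(
    java.time.Instant.now(), "product-ingest", "load", "WARN",
    "late-arriving records detected", 128L)
  // Written to the job log, which the forwarder ships to Splunk.
  println(event.toJson)
}
```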

Metadata

VirtusLab’s team also improved the process of metadata gathering and metadata coverage. The client had very large amounts of data from various systems, but most of it lacked metadata. We integrated with a third-party system to download the existing metadata, then defined a set of metrics to check to what extent the metadata of various objects met certain criteria. Thanks to that, the people responsible for data quality were able to fill in the missing metadata and better understand both the data they were using and the data they still needed.
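For illustration, one such coverage metric, the share of columns per table that carry a non-empty description, could be computed in Spark along these lines; the table name and column layout are assumptions:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object MetadataCoverage {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("metadata-coverage")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical landing table for catalog metadata:
    // one row per column with (database, table, column, description, owner).
    val columns = spark.table("governance.column_metadata")

    // Share of columns per table with a non-empty description.
    val coverage = columns
      .groupBy("database", "table")
      .agg(
        count(lit(1)).as("columns_total"),
        count(when(length(trim(col("description"))) > 0, 1)).as("columns_described"))
      .withColumn("description_coverage",
        col("columns_described") / col("columns_total"))

    // Worst-covered tables first, so data stewards know where to start.
    coverage.orderBy(asc("description_coverage")).show(20, truncate = false)
  }
}
```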

 


The analytical view

By using data from various sources, we are able to track product lifecycles in multiple dimensions and build the whole timeline of a given product: from recipe specification, through supplier agreements, packaging, range selection, pricing, and promotions, to quality checks and decommissioning.

Such an analytical view is used for various kinds of reporting, such as checking whether the products being sold are getting healthier, or helping to choose the cheapest supplier for a given product.
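A minimal sketch of how such a timeline can be assembled, by normalizing lifecycle events from several sources to a common shape and unioning them; all table and column names here are illustrative:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object ProductTimeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("product-timeline")
      .enableHiveSupport()
      .getOrCreate()

    // Normalize each source to a common (product_id, event_time, event_type, details) shape.
    def asEvents(table: String, eventType: String): DataFrame =
      spark.table(table).select(
        col("product_id"),
        col("event_time"),
        lit(eventType).as("event_type"),
        col("details"))

    // Union the lifecycle stages into one ordered timeline per product.
    val timeline = Seq(
      asEvents("product.recipe_specs",   "recipe_specified"),
      asEvents("product.supplier_deals", "supplier_agreed"),
      asEvents("product.price_changes",  "price_set"),
      asEvents("product.quality_checks", "quality_checked")
    ).reduce(_ unionByName _)
      .orderBy(col("product_id"), col("event_time"))

    timeline.write.mode("overwrite").parquet("hdfs:///analytics/product_timeline")
  }
}
```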

Data pipelines

VirtusLab’s teams have built pipelines that perform spatial and graph-based analytics to help optimize deliveries and delivery van schedules. The objective was to compute statistics for both road networks and delivery journeys, in order to cut the time wasted on expensive routes and to improve van utilization through better grouping of customers per journey.

Our work upgraded a semi-manual process, which previously took several days per delivery center and used only a fraction of the data, to a job that takes a few hours while processing all the historical data available for all delivery centers. The tools used include Kafka and HBase for data ingestion and long-term persistence, GeoSpark for distributed spatial computations, and the JGraphT library for high-performance, in-memory graph analysis of road networks.
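As a flavor of the graph part, here is a toy JGraphT example of shortest-path analysis over a road network; the vertices, weights, and route are made up for illustration:

```scala
import org.jgrapht.graph.{DefaultWeightedEdge, SimpleWeightedGraph}
import org.jgrapht.alg.shortestpath.DijkstraShortestPath

// Road junctions as vertices, road segments as weighted edges
// (weight = travel time in minutes).
object RoadNetworkExample extends App {
  val roads =
    new SimpleWeightedGraph[String, DefaultWeightedEdge](classOf[DefaultWeightedEdge])

  Seq("depot", "junctionA", "junctionB", "customer1").foreach(roads.addVertex)

  def connect(from: String, to: String, minutes: Double): Unit = {
    val edge = roads.addEdge(from, to)
    roads.setEdgeWeight(edge, minutes)
  }

  connect("depot", "junctionA", 7.0)
  connect("junctionA", "junctionB", 4.5)
  connect("junctionB", "customer1", 3.0)
  connect("depot", "customer1", 20.0) // slower direct road

  // The cheapest route feeds per-journey statistics such as expected travel time.
  val path = new DijkstraShortestPath(roads).getPath("depot", "customer1")
  println(s"route: ${path.getVertexList}, minutes: ${path.getWeight}")
}
```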

 


Main technologies
Apache Spark
Apache Hive
HDP Spark
Spark Streaming
Kafka
Hadoop
Tech results
1. Data structure optimization to ensure the performance of data serving.
2. 200+ data flows with dedicated monitoring and quality assurance.
3. Metadata metrics covering more than 200,000 tables with over 4 million columns across the data lake.
4. Reduced the analytical customer view build time (over 60 TB of data) from 24 hours to 1.5 hours with similar resources.
5. Geographical and spatial analytics.
6. Reduced processing time of spatial and graph data from months to hours.
7. Upgraded the delivery process from several days per delivery center to a job taking a few hours for all of them.
8. 6+ types of generic pipelines for the most common source types (Kafka, JDBC, CSV, REST); a sketch of such a reader abstraction follows this list.
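A sketch of what a generic-pipeline reader abstraction for those source types can look like; all class names here are illustrative rather than the project's actual code:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// One small interface per source family keeps each pipeline generic:
// the driver does not care where the data comes from.
trait SourceReader {
  def read(spark: SparkSession): DataFrame
}

final class JdbcReader(url: String, table: String) extends SourceReader {
  def read(spark: SparkSession): DataFrame =
    spark.read.format("jdbc").option("url", url).option("dbtable", table).load()
}

final class CsvReader(path: String) extends SourceReader {
  def read(spark: SparkSession): DataFrame =
    spark.read.option("header", "true").csv(path)
}

final class KafkaReader(bootstrap: String, topic: String) extends SourceReader {
  def read(spark: SparkSession): DataFrame =
    spark.read.format("kafka")
      .option("kafka.bootstrap.servers", bootstrap)
      .option("subscribe", topic)
      .load()
}

// One generic driver then covers every source type: read, then land on HDFS.
object GenericPipeline {
  def run(spark: SparkSession, reader: SourceReader, target: String): Unit =
    reader.read(spark).write.mode("append").parquet(target)
}
```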

Business benefits
1. Shorter generation time for customer health data, from hours to minutes, so the customer can access the most recent statistics in near real time.
2. A single source of truth for all data across the company.
3. A decreased number of in-store incidents and frauds, achieved by identifying suspicious events and alerting the in-store team.
4. Support for the coordination of supply and transport during the difficult times of the COVID-19 pandemic, through streamlined data access and improved observability of data pipelines.

Looking for Data experts?
You don’t have to search anymore – contact us!

"*" indicates required fields

If you click the “Send” button you agree to the privacy policy. Your personal data given in the contact form above will be processed for purposes of answering your inquiry and for any further correspondence regarding this inquiry. The controller of your personal data is VirtusLab Sp. z o.o. For more information, see our Privacy Policy