This post was originally published by Richard Farnworth at Towards Data Science
Big Retail + Big Data
Supermarkets are big business and they use data on a big scale. Originating in the US in the 1930s, supermarkets have gradually taken over a bigger and bigger share of the retail and grocery market. Giants like Wal-Mart, Aldi and Carrefour are among the largest retailers in the world, with revenues in the hundreds of billions of dollars. As such, many have invested heavily in big data, with analytics and data science forming a core part of their decision making.
Every product purchased, along with its price, is recorded in gargantuan databases, with tables exceeding hundreds of billions of rows. Loyalty schemes, where customers accumulate points by scanning their loyalty card at each purchase, allow the company to stitch together a customer’s entire history of transactions, gaining more insight than through looking at baskets in isolation. The richness of this data provides value in many ways across the organisation, a few examples of which are explained below.
Supermarket shelves are hot property. Every square inch of each aisle is potentially worth thousands of dollars per year, and supermarkets go to great lengths to make sure none of it is wasted on products that don’t perform well. But “performing well” often isn’t as straightforward as picking the highest-selling products or those with the highest margins. If it was, the whole store would just be milk and bananas. You have to cater to all the different customers who come into your store and the different meals and “missions” that drive them through the door.
For example, a particular condiment might not be a superstar seller, but if it’s important to the older demographic, then removing it from the shelves might force them to shop elsewhere. Also, imagine someone is planning to make burritos this evening. If they can get most of the ingredients in your store, but you don’t sell tortillas, they may end up taking all of that potential revenue to a competitor.
At the same time, a diverse range costs money. Aside from the aforementioned shelf real-estate, there’s the complex logistics of managing a large range of different products. You have to be able to move products from supplier to distribution centre to supermarket to aisle, arriving “just in time” so that the available stock on shelf neither overflows, nor runs out. More products mean more supply lines to manage and less shelf-stock to act as a buffer. Each product range also adds further work to the expensive contractual negotiations between supermarket and supplier, where things like price, promotions, levels of supply and advertising spend are agreed.
If you’ve been in a budget supermarket like Aldi, you’ll notice they often have a smaller choice of products for each product type, but maintain higher stock levels in store. This is precisely to cut down on the above costs, enabling lower prices at the expense of choice rather than quality.
This all makes for a very complex optimisation problem in which data science plays a pivotal role. Products are regularly assessed against a number of criteria, such as sales, profitability, the number of customers who purchased them and the loyalty of those customers to the product when it is not on promotion. Machine learning models, trained on past examples of range changes, can be used to predict how customers will react to proposed changes in the future. By taking store characteristics into account, such as size, local demographics and proximity to competitors, ranges can be optimised on a store-by-store basis.
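As an illustration of the kind of scoring that might feed a range review, here is a minimal sketch based on the criteria above. The products, weights and normalisation caps are entirely hypothetical; a real model would learn from past range changes rather than hard-code them.

```python
# Hypothetical range-review scoring on the criteria named above:
# sales, margin, number of customers, and off-promotion loyalty.
products = [
    # (name, weekly_sales, margin, customers, off_promo_loyalty)
    ("tuna_185g",   1200, 0.22,  900, 0.35),
    ("cola_330ml",  5000, 0.15, 3200, 0.80),
    ("condiment_x",  300, 0.30,  250, 0.90),
]

def keep_score(sales, margin, customers, loyalty,
               w=(0.4, 0.2, 0.2, 0.2), caps=(5000, 0.3, 3200, 1.0)):
    """Weighted score of capped, normalised criteria.
    Higher score = stronger case for keeping the product."""
    vals = (sales, margin, customers, loyalty)
    return sum(wi * min(v / c, 1.0) for wi, v, c in zip(w, vals, caps))

# Lowest-scoring products become the first candidates for review.
ranked = sorted(products, key=lambda p: keep_score(*p[1:]))
```

Note how the loyal-but-niche condiment outranks the mid-selling tuna here: loyalty protects a product even when raw sales don’t.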
For example, imagine a humble tin of canned tuna. It sits on a shelf with many other cans of tuna, with different flavours, brands, price points and pack sizes. If you removed it from the store, most people who were looking for it might simply switch to another canned tuna product. A small minority might postpone their purchase or look to buy elsewhere. For a product with a highly loyal customer base, such as cans of Coca Cola, this would play out differently.
All of this data analysis and modelling helps store category managers regularly assess the effectiveness of their range, optimising it for efficiency while trying to keep customers satisfied.
Price elasticity measures how demand for a product changes with its price. Put simply, the cheaper something is, the more people will want to buy it (with some exceptions). Pricing a product well means finding the point on the elasticity curve that delivers the most profit, balancing the margin on each pack against the number of packs sold.
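That trade-off can be sketched with a toy constant-elasticity demand model. The demand function, elasticity and cost figures below are illustrative assumptions, not real supermarket numbers.

```python
# Toy constant-elasticity demand: q(p) = q0 * (p / p0) ** elasticity.
def demand(price, q0=1000, p0=2.0, elasticity=-2.0):
    return q0 * (price / p0) ** elasticity

def profit(price, unit_cost=1.0):
    # Margin per pack times packs sold.
    return (price - unit_cost) * demand(price)

# Grid search for the profit-maximising price, $1.01 .. $5.00.
prices = [round(0.01 * i, 2) for i in range(101, 501)]
best = max(prices, key=profit)
# For elasticity e < -1 the theoretical optimum is cost * e / (1 + e),
# here 1.0 * (-2) / (-1) = $2.00, which the grid search recovers.
print(best)  # 2.0
```

Raising the price above $2 here earns more per pack but loses more than that in volume; dropping below $2 does the reverse.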
Things get a little more complicated when you introduce competing products into the equation. If you drop the price of Coca Cola, its sales will go up, but it will also negatively affect the sales of Pepsi. This introduces the idea of Cross-Price Elasticity, which models customers’ choices within a price “landscape”. Prices must be calibrated effectively within the context of the product category, both to maximise overall profit and to give the customer a clear delineation of value. Goldilocks pricing, where there is a good option, a better option and a best option is very common with the precise price gaps between the products decided through careful modelling.
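A minimal sketch of cross-price elasticity, using a hypothetical two-product elasticity matrix (the figures are made up for illustration):

```python
# elasticity[i][j] = % change in demand for product i
#                    per 1% change in the price of product j.
elasticity = {
    "coke":  {"coke": -2.5, "pepsi":  1.2},
    "pepsi": {"coke":  1.2, "pepsi": -2.5},
}

def demand_change(price_changes):
    """% change in demand for each product, given % price changes."""
    return {
        prod: sum(e * price_changes.get(other, 0.0)
                  for other, e in cross.items())
        for prod, cross in elasticity.items()
    }

# Dropping Coke's price 10% lifts its own sales but dents Pepsi's:
print(demand_change({"coke": -10.0}))
# {'coke': 25.0, 'pepsi': -12.0}
```

The positive off-diagonal terms are what make the products substitutes: a price cut on one pulls demand away from the other.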
With fresh produce, the equation is a little different. Fruit and vegetables are planted months in advance, and cannot be harvested to order. Crops are picked according to the time of year and the weather. As soon as they leave the farmer’s field, it’s a race against time to get them onto the shelves and out through the checkouts before they go off. For the supermarket, this means predicting the best price to shift that week’s harvest whilst maximising margin. If 2 million tomatoes are plucked from the vines this week, then we need to sell 2 million tomatoes. Price them too low and it’s a missed opportunity followed by empty shelves. Price them too high and you’ll end up with an aisle of rotten tomatoes (or at least you’ll have to heavily discount towards the end of the week, wiping out all of your profit).
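Finding the price that clears a fixed harvest can be sketched as a root-finding problem: pick the price at which weekly demand equals this week’s supply. The demand model and numbers below are illustrative assumptions.

```python
# Toy demand curve for tomatoes (illustrative parameters).
def demand(price, q0=3_000_000, p0=2.0, elasticity=-1.5):
    return q0 * (price / p0) ** elasticity

def clearing_price(supply, lo=0.5, hi=10.0, tol=1e-6):
    """Bisect for the price where demand(price) == supply.
    Demand falls as price rises, so the function is monotonic."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if demand(mid) > supply:
            lo = mid   # demand too high: raise the price
        else:
            hi = mid   # demand too low: lower the price
    return round((lo + hi) / 2, 2)

# 2 million tomatoes picked this week -> the price that sells them all:
price = clearing_price(2_000_000)
```

Anything below this price leaves money on the table; anything above it leaves tomatoes on the shelf.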
There are typically three main types of promotion:
- X% off — generally intended to encourage people to try something new or switch to a typically more expensive product. It drives up sales in the short term, but the hope is that some of those customers will switch their behaviour over the long term, becoming more valuable customers.
- Three for the price of two (or X for $Y) — “multi-buys” are designed to increase the basket size and the value of existing customers to a product or category. By sending the customer home with more stock, you are trying to bring forward potential future purchases and potentially increase the customer’s rate of consumption. For example, a customer who buys a $3 block of chocolate once a fortnight might take advantage of a “two for $5” promotion. They might then wait four weeks before the next purchase (pantry stocking), but they also might just eat twice as much chocolate, changing their behaviour and potentially becoming more valuable over time.
- Every day low price — designed to compare favourably with prices in competing supermarkets and draw people into the store. For example, if nappies are always cheaper in your supermarket, then many parents of young children will do their entire shop in your store, bringing in hundreds of dollars in associated revenue.
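The chocolate multi-buy arithmetic above can be worked through as revenue per week under each behaviour (prices from the text; the scenarios themselves are hypothetical):

```python
# Revenue per week under each possible customer response
# to a "two for $5" promotion on a $3 block of chocolate.
scenarios = {
    "baseline":        3.00 / 2,  # one $3 block every fortnight
    "pantry_stocking": 5.00 / 4,  # buys two for $5, then waits four weeks
    "eats_more":       5.00 / 2,  # buys two for $5 every fortnight
}
for name, per_week in scenarios.items():
    print(f"{name}: ${per_week:.2f}/week")
# Pantry stocking actually *lowers* weekly revenue ($1.25 vs $1.50);
# the promotion only pays off if consumption rises ($2.50/week).
```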
When choosing a strategy, cross-price elasticity models built on historical promotions can be used to predict outcomes and to set benchmarks for how well we expect a promotion to perform. They can also inform promotional depth and frequency, and which products should not be promoted together.
Measuring the effects of promotions against their original objectives is important to make sure you’re not just giving money away to people who would have bought those products anyway.
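A minimal sketch of that measurement: compare what the promotion actually earned with what a baseline forecast says would have been earned anyway. The baseline, sales and prices below are illustrative numbers.

```python
# Illustrative promotion post-mortem.
baseline_units = 1000            # forecast sales at full price
actual_units = 1600              # sales during the promotion
full_price, promo_price = 4.00, 3.00
unit_cost = 2.00

incremental_units = actual_units - baseline_units
# Margin actually earned during the promotion...
promo_profit = actual_units * (promo_price - unit_cost)
# ...versus what the baseline would have earned at full price.
baseline_profit = baseline_units * (full_price - unit_cost)
uplift = promo_profit - baseline_profit
print(incremental_units, uplift)
# 600 extra units sold, yet profit is $400 *lower* than baseline:
# the discount given to the 1000 would-have-bought-anyway customers
# outweighs the margin on the incremental sales.
```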
A few years ago, promotions were advertised solely through broadcast media or weekly catalogues. These strategies are expensive and broad-brush — despite the huge diversity of customers who come into store, you can only send one message. Data science has changed all this with personalised communications sent straight to subscribers’ inboxes.
Woolworths in Australia sends out weekly marketing emails to its few million loyalty card members. Each one is personalised based on a huge model containing millions of features per customer. Instead of just highlighting that week’s biggest promotions, the model takes into account the customer’s previous buying behaviour, including the length of time since they bought specific items. This means the information is not only relevant to the customer’s tastes, but its recommendations are likely to relate to the things they’re running out of this week. By giving such personalised information, the likelihood of customers paying attention, and then going into store and making a purchase, is greatly increased.
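The “running out this week” idea can be sketched as a simple recency score: rank a customer’s usual items by how overdue a repurchase is relative to their personal buying cycle. The purchase histories here are hypothetical, and a production model would use far richer features.

```python
# Rank items by how overdue the customer's next purchase is.
def overdue_score(days_since_last, usual_cycle_days):
    """> 1.0 means the customer is past their usual repurchase point."""
    return days_since_last / usual_cycle_days

customer_items = {
    "milk_2l":     {"days_since_last": 8,  "usual_cycle_days": 7},
    "coffee_500g": {"days_since_last": 10, "usual_cycle_days": 28},
    "nappies":     {"days_since_last": 13, "usual_cycle_days": 14},
}

ranked = sorted(customer_items,
                key=lambda k: overdue_score(**customer_items[k]),
                reverse=True)
print(ranked)  # ['milk_2l', 'nappies', 'coffee_500g']
```

Milk tops the list because the customer is already a day past their weekly cycle, even though coffee was bought less recently in absolute terms.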
It’s difficult enough to determine how to price, promote and position products that have been on the shelves for years, but the challenge is greater still when a product is new. New products require lots of upfront investment in R&D, testing, certifications, production capacity and marketing. If they sell well, it will be paid off many times over. If they flop, that’s a lot of money down the drain.
Understanding the market and trying to find a gap is within the scope of Market Research. Qualitative data gathered through surveys and focus groups is combined with quantitative data from other markets to identify potential new products and estimate (with some broad assumptions) the size of the opportunity. Data from the supermarket itself can be used to fill out the picture, by looking at how loyal customers are to competitor products, how sensitive to promotions the category is and how new product launches have performed in the past. Once the product has been launched it is benchmarked against the progress of other new products, adjusting for its own particular attributes and promotion schedule. This helps suppliers and supermarkets make earlier decisions as to whether to continue production or to cut their losses.
Whilst supermarkets are already big users of data science and AI, there are many interesting concepts that may become more mainstream in the years to come.
Customer tracking is imperfect in its current form. Using loyalty cards to identify customers will bias your “identified customers” towards those with a more frugal mindset. Using credit card information mitigates this somewhat, but even then, people often use multiple cards and may use their partner’s card. There is also the problem that if someone enters the store but leaves without buying, their visit goes unrecorded (this is more of an issue for clothing retailers than supermarkets). Facial recognition technology and Bluetooth beacons attempt to plug these gaps and even provide data on how people move around the store. Having data on what a person walked past, what they paused at and how long they spent in store will bring further refinements in the layout of the store and the effectiveness of in-store promotions. Obviously there are major ethical implications with this technology, though, which may slow its roll-out, at least in the West.
Checkout-free stores have been a major talking point since the first Amazon Go opened in Seattle a few years ago. Customers simply take the items they want from the shelf and walk out the door. Items are tracked as they leave the shelf using cameras and shelf sensors, and payment occurs automatically via an app on the customer’s mobile phone. Reducing the time it takes to make a purchase enables more frequent, impulsive shopping at a lower running cost.
Customised pricing/promotions are also an interesting possibility. Different customers have different budgets and place different values on products. Being able to give specific customers money off specific products allows the supermarket to promote products very efficiently without simply giving money away to those who would have bought anyway. Some supermarkets do this already, but rather than money off, they reward their customers with loyalty points. This has the double benefit of giving the customer a discount while making sure they spend what they “saved” within the supermarket’s loyalty ecosystem.
Structurally clean but practically messy
Working with supermarket data is a dream compared to data from many other sources. It’s incredibly high volume, with thousands to millions of transactions per week, so you can measure even very small effects with a high degree of statistical significance. And most supermarkets have teams of data engineers doing all the technical integration work, so by the time it reaches the hands of a data scientist, it’s clean, concise and comprehensive.
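To see why volume matters, here is a sketch of a two-proportion z-test detecting a tiny lift in purchase rate, feasible only because n is huge. The basket counts are illustrative.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# A 0.1 percentage-point lift (2.0% -> 2.1%) over a million baskets each:
z = two_proportion_z(21_000, 1_000_000, 20_000, 1_000_000)
print(round(z, 1))  # comfortably beyond the ~1.96 threshold for p < 0.05
```

With a few thousand baskets instead of a million, the same 0.1-point lift would vanish into the noise.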
But the complexity of studying a dataset involving an ever-changing landscape of tens of thousands of products, in hundreds of stores, bought by millions of customers billions of times over can be overwhelming. No two weeks are ever the same — think Easter (which moves around every year), the day of the week Christmas falls on, public holidays, changes to the range, product shortages, seasonality, weather and broader economic conditions. Never mind global pandemics! Despite this, supermarket data is a rich and diverse window into the lives of people from across society, and it is something I’ve very much enjoyed working with. The list above is far from exhaustive, and different retailers use data to a greater or lesser degree. But in the 21st century, being data savvy is essential for supermarkets to compete.