Teppei Syokudo - Improving Store Performance: Data



Data Exploration

In the initial stages of the project, the problem is analysed by looking at the available data and understanding its various aspects. The main aims of this step are to:

  1. Maximize insight into a data set
  2. Uncover underlying structure
  3. Extract important variables
  4. Detect outliers and anomalies
  5. Test underlying assumptions
  6. Develop parsimonious models
  7. Determine optimal factor settings. (“e-Handbook of Statistical Methods”, 2016)

Through these steps, the problem is ultimately defined, a solution model is developed with the analysis methods identified, and the necessary data preparation is determined.

Labour Productivity

In exploring the sales-labour data, we first found the mean sales per hour from both MW and RP. It can be seen that the mean sales for MW peak from 11:00 to 13:00 during lunchtime and 18:00 to 20:00 during dinnertime. Likewise, RP’s mean sales peak from 11:00 to 13:00, and from 17:00 to 19:00.

We hypothesized that more productive staff would, on average, achieve hourly sales above the shop's hourly average. We explored staff performance using a common measure of labour performance: staff productivity.

We attributed hourly sales to each of the staff present in the shop at that hour. We then took an average of the attributed sales for each staff member by dividing his total attributed sales by the total number of hours that he worked.

However, we realized that there was an hourly effect on retail sales, which affects the labour productivity of the staff. On an absolute basis, if Staff A and B both work the same number of hours, but A works during peak hours and B works during non-peak hours, A's labour productivity (store sales / number of hours worked) will be higher than B's. This could misrepresent performance, because Staff A might be poorer at customer service or upselling than B. This creates a need for data standardization on an hourly basis. For more information on the methodology used, please refer to the Data Preparation section.

After standardizing the data, we ranked the staff based on their standardized labour productivity, took the top 5 and bottom 5 performers for each store, and plotted their hourly sales against the shop's average sales.
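
As an illustration, the sketch below shows this attribution and ranking step in pandas. The table `hourly_staff_sales` and its column names are hypothetical placeholders for the joined sales-labour data, not the actual schema; the per-hour standardization mirrors the method described in the Data Preparation section.

```python
import pandas as pd

# Hypothetical joined sales-labour table: one row per (date, hour, staff member
# on shift), with the shop's sales for that hour attributed to that staff member
hourly_staff_sales = pd.read_csv("hourly_staff_sales.csv")

# Standardize sales within each hour of the day (z-score per hour) so that
# peak and non-peak hours are compared on the same scale
by_hour = hourly_staff_sales.groupby("hour")["sales"]
hourly_staff_sales["sales_z"] = (
    hourly_staff_sales["sales"] - by_hour.transform("mean")
) / by_hour.transform("std")

# Average the standardized attributed sales per staff member, then rank
productivity = hourly_staff_sales.groupby("staff")["sales_z"].mean()
top_5 = productivity.nlargest(5)
bottom_5 = productivity.nsmallest(5)
```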

Our hypothesis is only partially supported: the top 5 performers do sell above the mean shop sales, but mostly during peak hours, and the bottom 5 performers sell below the mean shop sales, likewise mostly during peak hours.

This implies that there is value in identifying high performers who perform consistently. Firstly, we can benchmark staff performance against the top performers. Secondly, we can qualitatively assess the behaviours of top performers that drive sales and develop training for the rest of the staff to emulate them.

Product Portfolio Analysis

Looking at both MW and RP's product sales, sales of Main - Meal show a decreasing trend, as do sales of Main - Drink. This is likely due to the introduction of Set Menus, as customers tend to prefer purchasing sets rather than à la carte items. The inverse relationship between Main - Meal and Set Menu is much more obvious in RP: the moment the Set Menu was introduced in November, Main - Meal sales started dropping.

The most popular Main - Meal in both outlets is the Kaisendon, which sells on average 60 times daily in MW and 50 to 60 times daily in RP. Its related set is the Seafood Feast, which averages 40 and 24 sales daily in MW and RP respectively.

We can also see that sales of Main - Onigiri and Main - Fried show relatively stagnant to decreasing trends in both outlets, which suggests that the onigiris and fried items are not very popular. To boost their sales, Teppei Syokudo may want to consider introducing onigiri sets and fried-item sets.



Data Preparation

After a preliminary examination of the data, the team identified that data preparation is required in the following areas:

  • Sales and labour data are currently stored separately; the two datasets must be joined before staff performance can be analysed.

We standardized the sales and customer numbers on an hourly basis, using hourly means and standard deviations. This allows us to better compare staff performance based on how far a person's performance deviates from the hourly average. A good staff member will have a high positive deviation from the mean hourly sales, and vice versa.

Another benefit of standardization is the ability to remove outliers from the data. We define outliers as data points whose sales or customer numbers are more than three standard deviations from the mean in either direction. This prevents an underperformer from appearing to have a high performance due to an anomaly, and vice versa.

In the data given for RP, public holidays and weekends were included as blank rows even though the store was not open on those days. As these values affected the store mean and standard deviation, these data points had to be removed. We identified the public holidays for the period from the Ministry of Manpower's website and removed them, together with the weekends, from RP's dataset.
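
A minimal sketch of this cleaning step is shown below, assuming a pandas workflow; the file name, column names and the holiday dates listed are hypothetical and purely illustrative.

```python
import pandas as pd

# Hypothetical hourly sales extract for RP (column names are placeholders)
hourly = pd.read_csv("rp_hourly_sales.csv", parse_dates=["date"])

# Drop weekends and public holidays, on which RP is closed; the holiday list
# would be taken from the Ministry of Manpower's published dates (illustrative here)
public_holidays = pd.to_datetime(["2015-12-25", "2016-01-01"])
hourly = hourly[~hourly["date"].isin(public_holidays)]
hourly = hourly[hourly["date"].dt.dayofweek < 5]

# Remove outliers: sales more than three standard deviations from the hourly mean
by_hour = hourly.groupby("hour")["sales"]
z = (hourly["sales"] - by_hour.transform("mean")) / by_hour.transform("std")
hourly = hourly[z.abs() <= 3]
```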

  • Analysing the productivity of staff requires sales data in an hourly format, whereas the current data only stores each staff member's start and end work timings; new variables have to be created to indicate whether a staff member is working in a particular hour on a particular day (see the sketch after this list).
    • Receipt data had to be recoded into rows of transaction data. The following dataset was prepared before data analysis could be carried out:
[Figure: prepared transaction dataset]
    • The data was segregated into set and non-set orders to find patterns in each group. In the non-set data, sets and their components are treated as a single item. For sets, we chose to analyse the components within each set, distinguishing popular choices from unpopular ones.
  • Before useful analysis of the product portfolio mix can be provided, sales data has to be broken down to the level of each order. This data is currently stored in a POS system and the team is in the midst of investigating whether the data can be retrieved and used meaningfully.
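
Below is a minimal sketch of the hourly-indicator step referenced in the first bullet, assuming a pandas workflow and hypothetical column names (staff, date, start_hour, end_hour); it is an illustration rather than the team's actual preparation script.

```python
import pandas as pd

# Hypothetical shift records: one row per staff member per working day,
# with integer start_hour and end_hour columns (e.g. 11 and 20)
shifts = pd.read_csv("staff_shifts.csv", parse_dates=["date"])

# Expand each shift into one row per hour worked, giving an hourly indicator
# of which staff were present in the store during that hour
rows = []
for _, shift in shifts.iterrows():
    for hour in range(int(shift["start_hour"]), int(shift["end_hour"])):
        rows.append({"date": shift["date"], "hour": hour, "staff": shift["staff"]})
staff_hours = pd.DataFrame(rows)

# staff_hours can now be joined to hourly sales on (date, hour)
```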

Data Analysis Methods

Evaluation of Existing KPIs

Correlation Analysis will be used to evaluate the effectiveness of the existing KPIs. The team will correlate a particular KPI variable with sales (e.g., drinks% and sales) to find out whether sales are really affected by that KPI.
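
As an illustration of this step, the correlation between a single KPI and sales could be computed as follows; the file and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical daily extract: one row per day with the KPI and sales
daily = pd.read_csv("daily_kpi_sales.csv")

# Correlation between the drinks% KPI and daily sales
r, p_value = pearsonr(daily["drinks_pct"], daily["sales"])
print(f"Pearson r = {r:.2f}, p-value = {p_value:.3f}")
```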

Proposing New KPIs

The team will be using Clustering and Conjoint Analysis to identify the key variables that impact sales. Clustering sales data with labour data could help identify clusters with high sales, and the combination of variables leading to such sales values. Conjoint analysis would help identify the individual variables that are important for hitting sales values. In doing so, the team would be able to propose new KPIs based on those key variables.

Setting Numerical Targets

Through clustering, the team will also be able to determine the right numerical targets to set for individual staff. For example, clustering sales data and labour data could help determine which salespersons are able to make x number or $x of sales. This could then become the numerical target set for the lower-performing salespersons.

Product Portfolio Analysis

Last but not least, a Market Basket Analysis will be carried out to identify product pairings with high affinity. This will help Teppei to optimize its product portfolio to include only the most popular products. It will also identify suitable product pairings for cross-selling. Furthermore, it will help to identify hidden trends that can spur new product development.

Data Analysis

Market Basket Analysis

Data Analysis Methodology

Market Basket Analysis is broken down into two broad steps: frequent itemset generation and the creation of association rules. Popular algorithms include the Apriori and FP-Growth algorithms.

[Figure: itemset lattice]

Consider the above lattice: each node is an itemset. Algorithms have to identify the most efficient way to traverse the lattice and determine whether a particular itemset is frequent. There are various ways of generating candidate frequent itemsets and pruning them, and this is determined by the algorithm used to carry out the association analysis. The way the itemsets are generated and the association rules created determines how computationally complex the analysis will be. Therefore, the factors affecting the computational complexity of an algorithm have to be considered when mining association rules from large datasets. These include the transaction width, the number of products, the minimum support level and the maximum itemset size (Tan, Steinbach & Kumar, 2005). Since the transaction width and the number of products are predetermined, the team has chosen to focus on the latter two factors for our analysis: the association thresholds and the maximum itemset size.
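
To make the scale of the search space concrete: for d distinct products, the lattice contains every possible non-empty itemset, and the number of candidate rules grows even faster (Tan, Steinbach & Kumar, 2005), which is why pruning is essential:

```latex
\text{number of candidate itemsets} = 2^{d} - 1,
\qquad
\text{number of possible rules} = 3^{d} - 2^{d+1} + 1
```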

An important aspect of association analysis is the generation of frequent itemsets (or the elimination of infrequent itemsets). The minimum support (minsup) and minimum confidence (minconf) are taken into account: these are thresholds used to determine, for a rule A -> B, whether the itemsets A and B are frequent and whether A -> B is an acceptable association rule. While the team explored algorithms for determining optimal minimum support and confidence levels, such as Particle Swarm Optimization, we instead examined the data spread to determine appropriate thresholds.
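
In notation, with σ(X) denoting the number of transactions containing itemset X and N the total number of transactions, the two thresholds are applied to the standard definitions:

```latex
\mathrm{support}(A \rightarrow B) = \frac{\sigma(A \cup B)}{N},
\qquad
\mathrm{confidence}(A \rightarrow B) = \frac{\sigma(A \cup B)}{\sigma(A)}
```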

[Figures: product support distributions for MW and RP]

Based on the support levels seen in our dataset, most products have a rather low support level of less than 0.05. This is because most customers purchase a few particular products, namely the Kaisendon and sets containing the Kaisendon. Hence a minimum support level of 0.005 is selected, compared to conventional levels of 0.1 and higher. Although low minimum support and confidence levels can increase computational complexity, the complexity of this mining task remains low because of the small transaction width and number of products.

A relatively low minimum confidence level of 0.1 is also selected. A maximum itemset size of 2 is set since transactions have a low average width of 1.51 items (MW) and 1.49 items (RP).
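
The sketch below shows this mining step with the thresholds chosen above, using Python's mlxtend library for illustration rather than RapidMiner, which the team actually used; the one-hot transaction table and its file name are assumptions.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot transaction table: one row per receipt,
# one boolean column per product
basket = pd.read_csv("mw_basket.csv").astype(bool)

# Frequent itemsets with the thresholds chosen above:
# minimum support 0.005 and a maximum itemset size of 2
frequent_itemsets = apriori(basket, min_support=0.005, max_len=2, use_colnames=True)

# Association rules filtered at a minimum confidence of 0.1; lift, leverage
# and conviction are reported alongside support and confidence for each rule
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.1)
print(rules.sort_values("lift", ascending=False).head(10))
```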

Data Analysis Tools

In carrying out MBA, certain considerations have to be made. One important factor is the software or tool used. Based on the client requirements in this project, the tool must be open-source and easy to use. While the team understands that there is far greater utility in paid software such as Clementine (SPSS), Enterprise Miner (SAS), GhostMiner 1.0, Quadstone or XLMiner (Haughton et al., 2003), this requirement narrows the options down to three open-source tools: RapidMiner, R and Tanagra.

[Table: comparison of the three MBA tools]

After evaluating the three tools, the team realized that although R provides extensive measures and customizability, its learning curve is extremely steep and may not suit the client's non-programming background. Both RapidMiner and Tanagra are lightweight and easy to use; however, RapidMiner's more extensive interestingness measures led the team to choose it.

Data Analysis Measures

Using RapidMiner, six interestingness measures are collected: Support, Confidence, Laplace, Piatetsky-Shapiro (p-s, i.e. leverage), Lift and Conviction.

[Table: interestingness measures reported by RapidMiner]

The interestingness measures are analysed based on three key properties (Piatetsky-Shapiro, 1991) that determine a good measure M:

  1. M = 0 if A and B are statistically independent;
  2. M monotonically increases with P(A, B) when P(A) and P(B) remain the same;
  3. M monotonically decreases with P(A) (or P(B)) when the other parameters (P(A, B) and P(B), or P(A)) remain unchanged.

Below is an analysis of these measures based on the above three properties:

[Table: evaluation of the measures against the three properties]

The analysis shows that Lift is a good interestingness measure if the data is normalized, and Leverage is generally a good interestingness measure. In essence, both lift and leverage serve our purpose in interpreting the analysis results: measuring how many more units of an itemset are sold together than would be expected from independent sales. Leverage measures the difference between how often A and B occur together in the dataset and how often they would be expected to occur together if they were statistically independent (Piatetsky-Shapiro, 1991). Lift, conversely, measures how many times more often A and B occur together than expected if they were statistically independent. Consequently, the two measures provide the same ranking or ordering of products, since they essentially convey the same information.
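
In notation, for a rule A -> B the two measures are defined as:

```latex
\mathrm{lift}(A, B) = \frac{P(A, B)}{P(A)\,P(B)},
\qquad
\mathrm{leverage}(A, B) = P(A, B) - P(A)\,P(B)
```

Under statistical independence lift equals 1 and leverage equals 0, so values above these baselines indicate that the items are bought together more often than their individual sales would suggest.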

Analysis Results

Having selected the interestingness measures, the team examined the analysis results. The results are broken into two sections: sets and non-sets.

Within sets, we look at the association between main courses and their drinks or side toppings. Since the main dish within a set does not vary, it acts as the independent variable (the premise), while the side dishes or drinks act as the dependent variables (the conclusions). We broke the set components down by the various main dishes.

[Figure: association rules within sets]

The most popular additions are the Hotate Topping and the Hot Green Tea. (We could not analyse the data for RP, since only one set's data was collected, which provides an inaccurate measure of the association between the set's components.)

[Figure: association rules for non-set items (MW)]

For non-sets, based on the associations found, we can make the following recommendations for MW:

  • Provide a set for Ebi Katsu and Hokkaido Scallop
  • Provide a set for Salmon Caviar Onigiri and Salmon Onigiri
  • Buddy Meals consisting of the following:
    • Aburi Salmon Don & Kaisendon
    • Katsudon & Kaisendon
    • Katsu Curry & Kaisendon
[Figure: association rules for non-set items (RP)]

For non-sets, based on the associations found, we can make the following recommendations for RP:

  • Provide a set for Tuna Mayo Onigiri and Salmon Onigiri
  • Provide a set for Konbu Onigiri and Salmon Onigiri
  • Buddy Meals consisting of the following:
    • Tonkatsu & Kaisendon

We noticed a different product focus in the associations for RP as compared to MW. For example, the salmon onigiri has a higher association with the salmon caviar onigiri in MW, whereas in RP it has a higher association with the konbu onigiri. A general observation of the support of products purchased in MW shows a preference for more expensive food and a greater focus on product quality, while in RP there is more focus on sets and value for money. A plausible explanation is that customers at RP are more likely to be office staff (RP is closed on weekends), whereas MW attracts casual shoppers who tend to have higher spending power.

Recommendations

Ultimately, the analysis of the data yields three recommendations:

  • Highly associated items should be placed near each other or in a set to drive sales
  • For items frequently bought together, giving a discount for an item will drive the sales of the other significantly.
  • Avoiding the Profitable Product Death Spiral - we should not eliminate unprofitable products that are attracting profitable customers

For items such as the onigiris identified earlier, or some of the fried dishes, placing them near each other or giving a discount on one or the other can potentially drive an increase in sales. Similarly, putting these items in a set should increase sales volume as well.

We see that some of the side dishes bought together with the Kaisendon are rather unprofitable and low in sales volume. However, these products attract customers who purchase the Kaisendon, and hence we should not exclude these seemingly unprofitable items.

K-Means Clustering

Data Analysis Methodology

Our objective is to help the Teppei managers differentiate workers by performance. A common means of doing so would be to use a bell curve to grade the workers. This approach, most commonly used in education, is a statistical method of assigning grades designed to yield a pre-determined distribution of grades among the students in a class. While bell-curving is a convenient way to measure employees against other employees, it requires the manager to impose arbitrary benchmarks that may at times be too rigid. For example, if a manager classifies top performers as staff who fall within the top 25th percentile, he risks excluding staff who perform much better than the rest but fall just outside the top 25th percentile.

We propose the use of clustering to differentiate staff by performance for three reasons. Firstly, clustering allows a more dynamic way of classifying workers by performance: as staff are grouped according to their attributes, staff who perform similarly will be grouped together instead of being separated by arbitrary performance benchmarks. Secondly, clustering is useful when there are multiple types of attributes in a dataset, making it difficult to compare data points (Morissette & Sylvain, 2013). In our case, where staff can be graded on their ability to bring in more customers or more sales, clustering will be able to differentiate staff who perform better by bringing in more customers from staff who perform better by bringing in more sales. This is not possible with a single-attribute bell curve. Lastly, clustering analysis gives the manager a good balance of flexibility and rigor by allowing him to control the desired number of groups while still finding the optimal groupings of staff who perform similarly. This is a more valuable alternative to trying to visualize groups of staff using a scatter plot.

There are multiple ways to use clustering. We chose K-means clustering as it is the most applicable in this scenario for two reasons. Not only does it allow managers to explore different numbers of clusters (K values) for grouping their staff, it also allows them to fix an optimal number of clusters and implement an incentive program tied to each cluster. K-means clustering analyses the Euclidean distance between data points and forms K clusters. It first assigns K data points to be centroids and calculates the Euclidean distance from the remaining points (non-centroids) to these centroids. It then assigns each non-centroid to the cluster of its nearest centroid. The process is repeated for a fixed number of iterations to try to maximise the inter-cluster distance (the distance between the K centroids) and minimise the intra-cluster distance (the distance between non-centroids and their cluster centroid).
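
A minimal sketch of this clustering step with scikit-learn is shown below, rather than the RapidMiner workflow the team used; the staff performance table and its two standardised attribute columns are hypothetical placeholders for the prepared dataset.

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical staff performance table with two standardised attributes
staff = pd.read_csv("staff_performance.csv")
X = staff[["sales_z", "customers_z"]]

# Fit K-means with K = 4; multiple random initialisations (n_init) reduce
# sensitivity to the initial choice of centroids
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)
staff["cluster"] = kmeans.labels_

# The centroids summarise the typical performance profile of each cluster
print(pd.DataFrame(kmeans.cluster_centers_, columns=X.columns))
```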

Data Analysis Tools

The client has specified that they would like to execute or modify the analysis methodology themselves, and therefore preferred that we use open-source software requiring minimal coding. We chose RapidMiner over SAS, JMP and R because it is open-source, user-friendly for managers with no technical background, and has a wide range of functions, including K-means clustering.

Analysis Measures

In order to perform K-means clustering, the manager has to set the number of clusters K before proceeding with the analysis. Even if the manager has a K value in mind, he is encouraged to use cluster performance measures to assess the quality of the clusters formed for different K values.

We have selected two measures that are commonly used to evaluate K-means clustering. We chose two measures instead of one to test their consistency and increase the rigor of our evaluation. Ideally, the optimal number of clusters should perform best on both indices.

The Silhouette Index evaluates the consistency within clusters by measuring the similarity of each data point to the data points within its cluster and to those outside it. If a cluster's Silhouette Index, which is the average Silhouette score of all its data points, is high, its data points are very similar to each other and dissimilar to the data points in other clusters, and vice versa.
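
Formally, for a data point i, with a(i) the mean distance to the other points in its own cluster and b(i) the smallest mean distance to the points of any other cluster, the Silhouette score is:

```latex
s(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}}, \qquad -1 \le s(i) \le 1
```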

The Davies-Bouldin index (DB) measures how good the clustering scheme is by taking the ratio of the average scatter within a cluster to the average distance between clusters (in this case, Euclidean distance). If a cluster's Davies-Bouldin index is low, its data points are tightly clustered relative to their distance from the other clusters. To select an optimal K value, a manager should plot the K values against the Silhouette Index and the Davies-Bouldin index. The K value with the highest Silhouette index and the lowest Davies-Bouldin index will yield the optimal clusters.
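
The sketch below shows how both indices could be computed across a range of K values with scikit-learn, again using the hypothetical staff performance table from the earlier sketch rather than the team's actual RapidMiner process.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Hypothetical staff performance table with two standardised attributes
staff = pd.read_csv("staff_performance.csv")
X = staff[["sales_z", "customers_z"]]

# Evaluate cluster quality for K = 2 to 5: a higher Silhouette score and a
# lower Davies-Bouldin score indicate tighter, better-separated clusters
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    print(k, silhouette_score(X, labels), davies_bouldin_score(X, labels))
```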

Data Analysis Results

Select the K-Value

It would be intuitive for a manager to set K = 2 or 3 in order to differentiate between good, average and poor performers. However, we tested a range of K = 2 to K = 5 so as to fully utilize the clustering method's ability to achieve a deeper level of differentiation. We did not go higher than K = 5 as we recognized that the benefit of marginally better clusters would not justify the complexity of maintaining an incentive program for too many groups.

[Figures: Silhouette and Davies-Bouldin indices by K for MW and RP]

It is clear that the Silhouette index decreases for both stores as the number of clusters increases. DB, however, shows different results for MW and RP: for MW, DB increases as K increases, while for RP, DB increases up to K = 3 and decreases thereafter.

Based on the definitions of both performance measures, K = 2 is the most optimal, with the smallest DB and the largest Silhouette index value. The second most optimal K is K = 3 for MW and K = 5 for RP.

[Figure: clustering results for MW, K = 2 to 5]

With respect to the Cluster Analysis results, there are a few insights that can be drawn.

Firstly, based on both performance measures, K = 2 seems to be the optimal number of clusters. However, it does not have any useful business implications. The analysis shows that there seem to be two outliers who perform much worse than the rest of the group in terms of customer numbers and shop sales.

The two data points belong to Seraphina and Motoki, who joined Teppei on the 9th and 26th of December respectively and might be performing lower because of their lack of experience.

When K = 3, there seems to be a distinction between good performers and bad performers, with almost all of the good performers performing above shop average for Sales and Customer count.

When K = 4, there seems to be a distinction between average performers and bad performers, with the majority of the average performers in cluster_0 performing around the shop average for sales and customer count, whereas the bad performers perform below the shop average for both attributes.

When K = 5, few additional insights are generated.

[Figure: clustering results for RP, K = 2 to 5]

For RP's clustering results, the same applies for K = 2: there are few business implications other than the presence of two outliers, Megumi and Irwin. Megumi is a veteran who has worked in MW since June but was recently transferred to RP and has spent only a couple of days working there. Irwin is a newcomer who just started work at RP. This implies that transfers between stores might also affect staff performance.

For K = 3, although there is a distinction between good and bad performers, it is clear that there can be deeper differentiation, as the good performers split between staff who sell to many more customers and staff who sell to fewer customers but earn more revenue.

For K = 4, low performers who perform below the shop average for sales and customer count are separated out (cluster_2). Furthermore, there is a distinction between high performers who sell to more customers but are not the most productive with respect to sales (cluster_3) and high performers who are the most productive with respect to sales but not customer count. Managers can use this information to develop training programs for each high-performer group to strengthen the areas where they are weaker.

When K = 5, there is further differentiation between the average performers (cluster_0) and the low performers (cluster_3). Managers can use this information to study the low performers and find out why they are performing poorly.


Recommendations

The clustering results clearly suggest the benefits of segmenting staff by their performance using the clustering method.

Teppei managers can utilise the results from K = 4 to identify the good performers and study their qualities. Training programs can also be developed specifically for each group to strengthen their weaknesses.