ANLY482 AY2017-18T2 Group10 Project Overview: Data


Due to confidentiality, we will not be able to upload any charts onto the wiki. The fully disclosed analysis report is available in our Interim Report submission.

Data Overview

The sponsor provides the data in Microsoft Excel format, with one file per outlet per month, and the team has used Python to clean and prepare it. To date, the sponsor has provided data spanning December 2015 to December 2017. The datasets given to us are the Inventory Data, Monthly PLU (Price Look-Up) Data, Sales Data and Purchase Data. One limitation is that the company recently changed the format of its inventory data, so we are working with two different formats of inventory data. Below is a short description of each dataset:


Dataset Name    | Dataset Description
Inventory       | Describes the inventory orders for each outlet; the data is updated daily.
Sales           | Describes the daily sales for each outlet in each month.
DailyPLU        | Describes the number of patrons for each outlet, by meal type, daily.
Purchase Data   | Describes the quantity of ingredients ordered by each outlet daily.


It is noted that the sponsor only performs stock-taking once a month, so only monthly inventory data is available; no daily or weekly stock levels are recorded. However, the objective of the project is to forecast the demand for each ingredient on a daily basis. The proposed approach is to first forecast the demand for a particular month, and then use the PLU data, which contains the daily number of pax, to break the forecasted monthly usage down into daily values.
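
To make this breakdown concrete, the short sketch below uses a hypothetical forecasted monthly usage figure together with a made-up daily pax series standing in for the PLU data; each day's share of the month's total pax becomes its share of the monthly forecast.

import pandas as pd

# Hypothetical numbers for illustration: the forecasted usage of one
# ingredient for a month, and the daily pax counts for the same
# outlet and month taken from the PLU data.
monthly_forecast_kg = 320.0
daily_pax = pd.Series(
    [210, 185, 250, 300, 275],
    index=pd.date_range("2017-12-01", periods=5, freq="D"),
    name="pax",
)

# Each day's share of the month's total pax becomes its share of the
# forecasted monthly usage.
daily_share = daily_pax / daily_pax.sum()
daily_forecast_kg = daily_share * monthly_forecast_kg
print(daily_forecast_kg.round(2))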

Purchase Data Preparation Process

Purchase data refers to the quantity of ingredients ordered by each outlet on a daily basis; we have this data from December 2015 to December 2017. We identified several issues with the purchase data files.

Firstly, the dates cannot be used as structured: each row holds the quantity and amount purchased on 1 Jan, 2 Jan and so on, whereas for analysis each row should contain only a single observation. Secondly, ProductCode and ProductName have missing values; upon clarification with our sponsor, we were informed that these rows could be ignored. In addition, the purchase data will later be merged with the inventory data. Lastly, not all rows have an OutletNameLocation or SupplierName, and some OutletNameLocations are irrelevant to our project. As such, our group removed the missing and irrelevant rows, filled in the empty OutletNameLocation and SupplierName values, melted the data by date so that it could be analysed, and concatenated all the files into a single file covering every month. The diagram below visualises the entire data cleaning process.

Tennet Purchase Cleaning diagram.png
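
A minimal sketch of this cleaning flow is shown below. The column names OutletNameLocation, SupplierName, ProductCode and ProductName are taken from the data; the file paths, the outlet list and the assumption of a single quantity column per day are illustrative only (the actual files also carry an amount column per day).

import glob
import pandas as pd

ID_COLS = ["OutletNameLocation", "SupplierName", "ProductCode", "ProductName"]
RELEVANT_OUTLETS = ["Outlet A", "Outlet B"]  # hypothetical outlet names

def clean_purchase_file(path):
    df = pd.read_excel(path)

    # Outlet and supplier names are not repeated on every row, so fill them down.
    df[["OutletNameLocation", "SupplierName"]] = df[
        ["OutletNameLocation", "SupplierName"]].ffill()

    # Drop rows with missing product information (sponsor confirmed these can be
    # ignored) and keep only the outlets relevant to the project.
    df = df.dropna(subset=["ProductCode", "ProductName"])
    df = df[df["OutletNameLocation"].isin(RELEVANT_OUTLETS)]

    # Melt the one-column-per-day layout into one row per product per date.
    day_cols = [c for c in df.columns if c not in ID_COLS]
    long_df = df.melt(id_vars=ID_COLS, value_vars=day_cols,
                      var_name="date", value_name="quantity")
    long_df["date"] = pd.to_datetime(long_df["date"], errors="coerce")
    return long_df

# Concatenate every monthly file into a single table for analysis.
files = sorted(glob.glob("purchase/*.xlsx"))
purchases = pd.concat([clean_purchase_file(f) for f in files], ignore_index=True)

Melting each monthly file first and concatenating last keeps every file in a consistent long format before they are combined.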

Inventory Data Preparation Process

As the format of the Monthly Inventory Data changed from October 2017 onwards, there are two different formats, with different column names and different numbers of columns. To perform our analysis and EDA, we therefore had to process the two formats separately. Using Python scripts, we extracted the necessary columns from each file, standardised the column names and compiled everything into a single CSV data file, ‘Inventory_Processed_2016-2017.csv’.

Tennet inventory processing diagram.png
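
The sketch below illustrates how the two formats can be handled. The column mappings and folder layout are assumptions for illustration; only the idea of extracting, renaming and compiling the columns into ‘Inventory_Processed_2016-2017.csv’ reflects our actual process.

import glob
import pandas as pd

# Hypothetical mappings from each source format to a common schema;
# the actual column names in the sponsor's files differ.
OLD_FORMAT_MAP = {"Item Code": "product_code", "Item Description": "product_name",
                  "Qty On Hand": "quantity", "Outlet": "outlet"}
NEW_FORMAT_MAP = {"ProductCode": "product_code", "ProductName": "product_name",
                  "ClosingStock": "quantity", "OutletNameLocation": "outlet"}

def load_inventory(path, column_map):
    # Keep only the columns we need and rename them to the standard schema.
    df = pd.read_excel(path)
    return df[list(column_map)].rename(columns=column_map)

# Hypothetical folder layout: files before and after the October 2017 format change.
old_files = sorted(glob.glob("inventory/pre_oct2017/*.xlsx"))
new_files = sorted(glob.glob("inventory/oct2017_onwards/*.xlsx"))

frames = [load_inventory(f, OLD_FORMAT_MAP) for f in old_files]
frames += [load_inventory(f, NEW_FORMAT_MAP) for f in new_files]

inventory = pd.concat(frames, ignore_index=True)
inventory.to_csv("Inventory_Processed_2016-2017.csv", index=False)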

Sales Data Preparation Process

Sales data refers to the daily sales that ABC Company records each day for each outlet, broken down into payment categories such as card, NETS and cash. We have sales data from December 2015 to November 2017.

Since the sales data is relatively clean and ready to be analysed, little preparation was required. Our group added a calculated column called “revenue”, which is the sum of the “nett_sales” and “service” columns, and concatenated all four Excel files into a single file. Using Python scripts, we standardised the column names and compiled the files into one CSV data file.
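
A short sketch of this step is given below, assuming the files carry the “nett_sales” and “service” columns named above; the file paths and output file name are illustrative.

import glob
import pandas as pd

# Assumed layout: four Excel files covering December 2015 to November 2017.
files = sorted(glob.glob("sales/*.xlsx"))

frames = []
for path in files:
    df = pd.read_excel(path)
    # Standardise column names (lower case, underscores) so the files line up.
    df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
    # Revenue is defined as nett sales plus the service charge.
    df["revenue"] = df["nett_sales"] + df["service"]
    frames.append(df)

sales = pd.concat(frames, ignore_index=True)
sales.to_csv("Sales_Processed_2015-2017.csv", index=False)  # illustrative file name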

PLU Data Preparation Process

Daily PLU refers to the number of patrons that each outlet serves daily, split into various categories (Adult, Child, Student, Tourist, Senior and FOC Pax). There were several issues with the data. Firstly, all the days of the month sit in the same row, which makes the data hard to analyse. In addition, the column “Dept_Type2” in the sample data contains values that are irrelevant to us, such as “Soup Base”; according to our sponsor, we are only interested in the six categories above (Adult, Student, Senior, Tourist, Child and FOC Pax). There are also many rows for the same category that differ only in their descriptions. For example, an adult at an outlet may be recorded as “Wkend Adult Dinner Buffet” or “Wkday Adult Dinner Buffet”; since these descriptions are irrelevant to our analysis, we group all rows of the same category into one row per outlet. After resolving these issues, we concatenate the various Daily PLU files into one file to make the daily patron counts easier to analyse. In addition, since we want to work at the weekly level, we sum the customer counts within each week and group them together.

From our exploratory data analysis, we realised that some days have values of 0. After speaking to our sponsor, we learnt that these are days with private events, company events or cleaning days, on which no customers are served. Since these 0 values would affect our analysis, we replace them with the two-year average for the corresponding outlet, customer group and day of the week.

As such, we clean the data using the following steps (a code sketch of the pipeline follows the list):

  1. Remove irrelevant columns. Columns that are irrelevant to our analysis, such as transaction type, group_id and dept_id, are dropped.
  2. Unpivot the data. Each row originally contains the customer counts for multiple days; we unpivot the data so that each row contains the customer count for a single day.
  3. Filter out irrelevant rows. In the ‘Dept_Type2’ column we are only concerned with Adult data; other categories, such as Chope, FOC Pax and 12 other values, are filtered out.
  4. Include important columns. We introduce columns such as week number and year to give our models more features to analyse.
  5. Impute 0s with the 2-year average. We calculate the average for each outlet, customer category and day of the week, and impute the corresponding value into rows with 0 customers.
  6. Group by week number. Since we will be forecasting weekly customer counts, we sum the customer counts that belong to the same week.
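
The sketch below strings these six steps together. Only “Dept_Type2”, the customer categories and the dropped columns come from the description above; the remaining column names, file paths and the exact category filter are assumptions for illustration.

import glob
import pandas as pd

CATEGORIES = ["Adult", "Student", "Senior", "Tourist", "Child", "FOC Pax"]
IRRELEVANT_COLS = ["transaction_type", "group_id", "dept_id"]
ID_COLS = ["outlet", "Dept_Type2"]  # "outlet" is an assumed column name

def clean_plu_file(path):
    df = pd.read_excel(path)

    # 1. Remove columns that are irrelevant to the analysis.
    df = df.drop(columns=IRRELEVANT_COLS, errors="ignore")

    # 2. Unpivot so each row holds the customer count for a single day.
    day_cols = [c for c in df.columns if c not in ID_COLS]
    long_df = df.melt(id_vars=ID_COLS, value_vars=day_cols,
                      var_name="date", value_name="customers")
    long_df["date"] = pd.to_datetime(long_df["date"], errors="coerce")

    # 3. Keep only the relevant customer categories, collapsing the many
    #    descriptions (e.g. "Wkend Adult Dinner Buffet") into one category.
    long_df["category"] = long_df["Dept_Type2"].str.extract(
        "({})".format("|".join(CATEGORIES)), expand=False)
    long_df = long_df.dropna(subset=["category", "date"])
    return long_df.groupby(["outlet", "category", "date"],
                           as_index=False)["customers"].sum()

plu = pd.concat([clean_plu_file(f) for f in sorted(glob.glob("plu/*.xlsx"))],
                ignore_index=True)

# 4. Add year, week number and day-of-week features.
iso = plu["date"].dt.isocalendar()
plu["year"], plu["week"] = iso["year"], iso["week"]
plu["dow"] = plu["date"].dt.dayofweek

# 5. Impute zero counts (outlet closed for private events, company events or
#    cleaning) with the average for the same outlet, category and day of week.
means = (plu[plu["customers"] > 0]
         .groupby(["outlet", "category", "dow"])["customers"]
         .mean().rename("avg_customers").reset_index())
plu = plu.merge(means, on=["outlet", "category", "dow"], how="left")
zero = plu["customers"] == 0
plu.loc[zero, "customers"] = plu.loc[zero, "avg_customers"]
plu = plu.drop(columns="avg_customers")

# 6. Aggregate to weekly customer counts for forecasting.
weekly = (plu.groupby(["outlet", "category", "year", "week"],
                      as_index=False)["customers"].sum())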