ANLY482 AY2017-18T2 Group06 Project Overview

Proprietary trading has long relied on computers to help automate and execute trades. Data scientists, more commonly known on Wall Street as quants, have developed large statistical models for the purpose of this automation. These models, though complex, are largely static; as the market changes, a common occurrence in financial markets, they no longer work as well as they did in the past.

As technology advances, we are entering an era of artificial intelligence and machine learning. Systems are now capable of analysing large amounts of data at enormous speed and improving themselves in the process. Evolutionary computation and deep learning are seen as able to automatically recognize changes in the market and adapt in ways that the earlier statistical models could not.

pH7 is a private investment and consultancy firm that serves clients who are keen to grow their wealth and capital. It had its humble beginnings in Singapore in 2013 and has been working hard to build relationships with clients to understand their business and personal concerns. With its strong information analysis capabilities and experience, pH7 provides business opportunities and solutions that are customised to clients’ needs. In addition, pH7 leverages cutting-edge technology in its work, excelling in professionalism and productivity.

By partnering with market platforms that boast state-of-the-art technology and competitive market access, pH7 aims to capitalize on every investment and business opportunity present in the markets. It aspires to be a firm of excellence and distinction, known for its professionalism in its dealings and partnerships with clients.

 

MOTIVATION

 

OBJECTIVES

 

METHODOLOGY

Exploratory Segment

1. Data Collection
At the initial phase of data collection, we must ensure that we have all the fields needed for modelling at the later stages.
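As a minimal sketch of this check, and assuming the data arrives as CSV extracts handled with pandas, the snippet below loads a file and fails fast if any field assumed to be needed for modelling is missing. The field names here are hypothetical placeholders, not the client's actual schema.

```python
import pandas as pd

# Hypothetical field names; the actual schema depends on the client's trade data.
REQUIRED_FIELDS = ["timestamp", "symbol", "open", "high", "low", "close", "volume"]

def load_and_check(path):
    """Load a raw extract and stop early if any modelling field is missing."""
    df = pd.read_csv(path, parse_dates=["timestamp"])
    missing = [col for col in REQUIRED_FIELDS if col not in df.columns]
    if missing:
        raise ValueError("Missing fields needed for modelling: %s" % missing)
    return df
```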

2. Data Cleaning + Transformation
In the data cleaning and transformation phase, the data will be transformed into the statistical and analytical parameters needed for prediction later.
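A minimal sketch of this kind of transformation is given below, assuming a cleaned price series with timestamp and close columns. The derived parameters (log returns, rolling volatility and a simple moving average) and the 20-period window are illustrative choices only, not the final set of modelling parameters.

```python
import numpy as np
import pandas as pd

def add_model_parameters(df):
    """Derive illustrative statistical parameters from a cleaned price series."""
    out = df.sort_values("timestamp").copy()
    out["log_return"] = np.log(out["close"]).diff()              # period-on-period return
    out["volatility_20"] = out["log_return"].rolling(20).std()   # rolling volatility
    out["sma_20"] = out["close"].rolling(20).mean()              # simple moving average
    return out.dropna()                                          # drop rows without a full window
```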

3. Initial Data Exploration
In this phase, the data will be explored and we will determine the modelling approach based on the nature of the dataset. Necessary preparations, such as checking the variables for multicollinearity, will be taken into consideration before any modelling is done. Due to the nature of our dataset, careful data exploration must be carried out.
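One common way to check for multicollinearity is through variance inflation factors; the sketch below does this with statsmodels. The choice of library and the rule-of-thumb cut-off are assumptions made for illustration.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def vif_table(features):
    """Variance inflation factor per candidate predictor.
    Values well above roughly 5-10 are a common warning sign of multicollinearity."""
    X = add_constant(features)          # add an intercept column before computing VIFs
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return pd.Series(vifs).sort_values(ascending=False)
```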

Iterative Segment

4. Model Building
In this phase, we will create the model and determine the predictor and target variables. We will be experimenting with multiple approaches based on our initial understanding of the dataset after the exploration, ranging from visualizations to machine learning algorithms, to achieve the objectives of our client.
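As one possible shape for this phase, the sketch below fits a random forest classifier on hypothetical predictor columns and a hypothetical next-period direction label. The algorithm, feature names and target are assumptions for illustration; other approaches can be swapped in behind the same interface.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical choices: predictors are derived parameters from the transformation step,
# and the target is a next-period direction label (1 if the next close is higher, else 0).
FEATURES = ["log_return", "volatility_20", "sma_20"]
TARGET = "direction_next"

def build_model(train_df):
    """Fit one candidate model on the training portion of the data."""
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(train_df[FEATURES], train_df[TARGET])
    return model
```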

5. Model Validation
We will be proposing a multivariate data-sampling methodology to validate our model. Specifically, we will use the three-way approach to model validation known as “train, test and validate”. Due to the nature of the project, we would like to avoid overfitting and bias in our models, so we will be aiming for a more rigorous testing process with a larger sample data size to avoid such issues.

We will also be using benchmark metrics to test our predictive models and ensure that the results are satisfactory. Should they not be satisfactory, we will go back to phase 4 (model building) or phase 2 (data cleaning and transformation) and rebuild the model until the results are satisfactory.
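A minimal sketch of this validation process is given below, assuming time-ordered data and accuracy as the benchmark metric; the split proportions and the 0.55 threshold are illustrative assumptions, not agreed benchmarks. Keeping the split chronological matters for market data, since shuffling would let future information leak into training.

```python
from sklearn.metrics import accuracy_score

def chronological_split(df, train_frac=0.6, validate_frac=0.2):
    """Split time-ordered data into train / validate / test sets without shuffling."""
    n = len(df)
    train_end = int(n * train_frac)
    validate_end = int(n * (train_frac + validate_frac))
    return df.iloc[:train_end], df.iloc[train_end:validate_end], df.iloc[validate_end:]

def meets_benchmark(model, test_df, features, target, threshold=0.55):
    """Compare held-out accuracy against an assumed benchmark threshold.
    Failing the check sends us back to phase 4 (or phase 2) to rebuild the model."""
    predictions = model.predict(test_df[features])
    return accuracy_score(test_df[target], predictions) >= threshold
```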

Actionable Segment

6. Prediction/Prescription
After the modelling is completed, we intend to integrate the model into our client’s existing brokerage system in the form of forward testing. The predictor will also be run as a real-time prediction.
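How this integration would look depends on the client’s existing system, but as a rough hypothetical sketch, the loop below scores each new bar from a live feed with the fitted model and records the signals for later review rather than trading on them; the feed format and the log-only behaviour are assumptions for illustration.

```python
import pandas as pd

def forward_test(model, live_feed, features):
    """Score each new bar as it arrives from the brokerage feed and record the signal."""
    records = []
    for bar in live_feed:                      # each bar is a dict of the latest field values
        row = pd.DataFrame([bar])              # one-row frame so the model sees named columns
        bar["signal"] = model.predict(row[features])[0]
        records.append(bar)
    return pd.DataFrame(records)
```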

REFERENCES