IS480 Team wiki:2017T1 Ducky King Midterms



Midterm Header.png

Project Progress Summary

Project Highlights

  • Took 3 weeks to become fully familiar with and implement object-relational mapping (Sequelize)
  • Took 2 weeks to become familiar with logging features and REST security vulnerabilities
  • Built an administrative statistics dashboard to monitor the Ethereum network and trading activities
  • Improved security by implementing JSON Web Tokens
  • Revamped traditional SQL queries to use object-relational mapping for better maintainability
  • Set up logging using the ‘winston’ library for a production-ready build


DuckyKingMidtermsSlides.png DuckyKingMidtermMin.png

IP addresses for the Midterm demonstration will be released via secure channels.


DuckyKing Midterms X-Factor.png

Project Management

Project Status

DuckyKing Project Status.png

Overall Project Scope

Overall Scope Functions.png

Midterm Scope Focus

For the sprints leading up to the Midterms, our team focused on the following functions:

Midterm Scope Focus.png

Project Schedule

DuckyKing ProjectSchedule Version 3 MidTerm.png

Project Metrics

Team Ducky King has applied the following metrics to support efficient and effective project planning and execution. Of the 6 metrics, the team introduced 2 after Acceptance: the Bus Factor and the Production Ready Metrics. The Bus Factor lets us gauge each member’s cross-domain competency by measuring various competencies across the team. The Production Ready Metrics help us derive achievable, actionable tasks to work towards a production-ready build. This enables the team to understand what it takes to build a robust, production-ready application for our sponsor.

Metrics used are shown below:

Release Burndown Chart

The Release Burndown Chart is a visualisation tool to display the amount of work completed after each sprint against the ideal projected rate of completion for the project.

Ducky king release burndown.jpg

Sprint Velocity Chart

Ducky king velocity chart.jpg

The Sprint Velocity Chart measures the amount of work Team Ducky King completed during each sprint. Past velocity is useful in helping to predict how much work the team can complete in a future sprint.

Scrum Burndown Chart

The Scrum Burndown Chart offers a simple yet effective visualisation for scope and schedule management. With the daily visibility this chart provides, the team can mitigate schedule overruns.

The blue line in the scrum burndown chart represents the ideal progress for the sprint. The red line represents the actual progress made by the team during the sprint.

The team would like to highlight several key points for the following sprints after Acceptance:

Scrum burndown sprint 9.jpg

Team Ducky King did not finish one task, ORM integration, which had a spillover of 8 points because the assigned member, Sally, fell sick. The team therefore had to carry the task over to Sprint 10. After careful planning and careful consideration of the team’s dynamics and competencies, the story points were added to Sprint 10.

Bug Metrics

DuckyKing Bug Metrics.png

Production Ready Metrics

The Production Ready Metrics are updated after every sprint, and also guide the team in sprint planning.

Production Ready Metrics.png

Bus Factor

Ducky king bus factor.JPG

Pm comptency.JPG
Dev competency.JPG
Bd competency.JPG
Qa competency.JPG

Project Risks


Technical Complexity

Understanding Ethereum and Infrastructure Configuration

Ethereum is a powerful yet complex technology. To fully understand how Ethereum and other blockchain technologies work, our team read white papers and discussed the algorithms these technologies are built on. This allowed us to better understand the core technology that drives these powerful engines. Understanding Ethereum helped us cross a huge hurdle towards configuring a private, easily configurable blockchain: we had to build and configure a private blockchain that is far more efficient than the current public Ethereum blockchain.

With this in mind, we read up on various private Ethereum implementations and how to implement them. With the help of our sponsors and Prof Paul Griffin, we managed to set up our own private yet easily configurable blockchain.

Private eth github.jpg
Guides like this give us a brief introduction to configuring Geth and how these configurations will impact our blockchain
Geth config.jpg
The Geth configuration may look simple, but understanding and manipulating its attributes requires research and prior technical knowledge

Smart Contract

Flowlabs is developing a trading platform leveraging on the power of Ethereum blockchain technology. What makes the Ethereum blockchain so powerful is its ability to implement Smart Contracts. A Smart Contract is a self-enforcing computer protocol intended to facilitate, verify or enforce the performance of a contract. Smart contracts can help with exchanging money, property, shares or anything of value in a transparent and conflict-free manner while avoiding the need for a middleman.

For Flowlabs, the platform needs the ability to create virtual auctions and bids on the Ethereum blockchain. We enabled that by creating smart contract auctions and bids on the blockchain. These smart auctions and bids take in inputs and produce an output based on the logic written into them. To write that logic into the smart contracts, we had to learn a whole new language called Solidity, a programming language created for building Ethereum Smart Contracts. As it is a fairly new language, there are very few learning resources on the internet and we had a lot of trouble picking it up. Luckily, we received plenty of help from Professor Paul Griffin, who was able to point us in the right direction. As we have signed a non-disclosure agreement regarding the code, we are unable to show an example of the Solidity contract that we have written.

Sample smart contract.jpg
Example of a typical Solidity contract. Actual code is not shown due to the NDA

Not only did we have to build the smart auctions and bids, we also had to find ways to test them. We started with Remix, an online Solidity IDE. Remix allowed us to try different inputs and check whether the results were as expected. However, this was an inefficient way to test the smart contracts, so we decided to implement automated testing with a framework called Truffle. Truffle allowed us to automate contract testing for rapid development: instead of testing each input manually in Remix, we wrote test cases in Truffle to check that our contracts behaved correctly.

Truffle testing example.jpg
Example of Truffle Automated Test Case

Truffle testing output.jpg
Example of Truffle Automated Testing Output

Lastly, the Flowlabs platform anonymises its users and the details of their bids. Bidders on the platform cannot view other bidders’ bids; the only person who can view bid details is the auctioneer, and only for bids pertaining to his auction. Bid details are encrypted with the auctioneer’s public key so that only the auctioneer can decrypt them, and the encrypted bid details are then stored in the Smart Contract. At the Smart Contract level, only the auctioneer’s account can retrieve the encrypted bid details.

Bid encryption sig gen.jpg
Bid Encryption and Signature Generation

Bid decryption sig valid.jpg
Bid Decryption and Signature Verification

Hardening of REST endpoints

Security is an important aspect of every production-ready, live application. Our FlowNode is no different: its security vulnerabilities include those stemming from the Ethereum network and from our RESTful server (Node.js). However, as discussed with all relevant stakeholders, there is no need for the team to prioritise hardening or developing security measures at the Blockchain level. The principal considerations for accepting the current Flow Blockchain risk landscape are explained below.

Why do we not focus on hardening the Blockchain Network?

1. The Flow Blockchain Network is based upon a private Ethereum chain
Ether is the digital token produced in a functioning Ethereum network. In the public Ethereum network, these digital tokens carry monetary value, which incentivizes and motivates bad actors to attack the network. Although “Ethers” are produced within our network, our Ethereum chain is not connected to the public network. This difference in setup means we use a separate, privately owned ledger. As such, the “Ethers” produced within our network do not have any monetary value; they are merely used to fuel the transactions that happen within it. This lack of monetary value greatly demotivates actors from targeting our Flow Blockchain Network.
2. Only participating organisations of Flow Ecosystem have access to the FlowNode
The FlowNode software is proprietary and distributed only to participating organisations. Each FlowNode connects to a dedicated set of FlowNodes to join the Flow Blockchain Network. In other words, participating organisations do not join the Flow Blockchain Network directly via another organisation’s FlowNode; hence, they have no awareness of other participating organisations’ systems.
In the unlikely event that an attacker learns the IPs of the FlowNodes (an assumption), he/she may attempt to join the network. However, given the potential scale of the Flow Blockchain Network, it is statistically unlikely that the attacker possesses enough computational power (more than 51% of the hash rate) to exceed the combined hashrate of the legitimate FlowNodes. As such, he/she is unlikely to succeed in a “51% attack”, an attempt to “rewrite” the ledger (“history”).
In future development, a whitelisting mechanism at the smart contract level may be considered, as may IP whitelisting at the FlowNode level.

Due to the measures mentioned above, attack vectors on the Flow Blockchain Network are greatly reduced, and the risks are brought down to an acceptable level by the current business process and system design. Furthermore, the cryptographic functions built natively into the Ethereum source code are proven, state-of-the-art primitives. There have been no known cases of vulnerabilities within the source code itself; news of “Ethereum hacking” stems from vulnerabilities in software built around Ethereum, not from vulnerabilities within Ethereum.

After a series of discussions with relevant stakeholders, the team has decided to focus its security effort on hardening the RESTful server (middleware). The team will be adhering to a set of industry-recognised practices to harden the middleware.

Ducky king restful security checklist.png

Quality of Product

1. Coding Standards
As with all freelance projects, developers often take the path of least resistance and build applications that do not comply with industry best practices. This usually happens because the project manager does not take a hard stance on coding standards right at the start of the project, and even when standards are set, there is usually no one to ensure adherence to them.
This is why we started this project differently. At the outset, we decided on the coding standard we would adhere to: the Airbnb JavaScript style guide, one of the most widely followed style guides, used by the likes of Billabong, Evernote and General Electric. The style guide is enforced by our Software Architect, Kong Yu Jian.
This ensures consistent coding styles among the developers, covering details such as indentation and the use of double quotes, single quotes and backticks. Consistency allows for greater maintainability of the code and keeps the code base adaptable to change.
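In practice the Airbnb guide is usually enforced through an ESLint configuration along these lines. The exact rule overrides below are illustrative, not necessarily the team's actual config:

```javascript
// .eslintrc.js — illustrative sketch only; the team's actual overrides may differ.
module.exports = {
  extends: 'airbnb-base',        // pull in the Airbnb JavaScript style guide rules
  env: { node: true, mocha: true }, // middleware runs on Node; tests use a mocha-style runner
  rules: {
    'no-console': 'off',         // example override: allow console output during development
  },
};
```

With this file in the repository root, `eslint .` flags every deviation from the agreed style, so adherence no longer depends on manual review alone.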

2. Git Workflow
Our team adopts the Gitflow workflow, mainly because it provides a robust framework for managing a project of this scale: at any point in time, multiple developers may be working on different features, and Gitflow allows feature development to be carried out in parallel.
This workflow is very similar to the Feature Branch Workflow. The main difference is that it assigns very specific roles to different branches and defines how and when they should interact with each other. It also uses dedicated branches for preparing, maintaining, and recording releases.
Ducky king git workflow.JPG
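The branch roles described above can be exercised with plain Git commands. Branch and commit names below are examples only:

```shell
# Illustrative Gitflow-style sequence in a throwaway repository; names are examples.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Ducky King Dev"
git commit -q --allow-empty -m "initial commit"       # default branch: release history only
git branch develop                                    # develop: long-lived integration branch
git checkout -q -b feature/orm-integration develop    # feature branch cut from develop
git commit -q --allow-empty -m "feat: ORM integration"
git checkout -q develop
git merge -q --no-ff -m "Merge feature/orm-integration" feature/orm-integration
git checkout -q -b release/1.0 develop                # release branch for final hardening
```

The `--no-ff` merge preserves a merge commit per feature, so the history records when each feature branch landed in `develop`.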

3. Usage of Adapter Pattern
The middleware has to return responses in an appropriate structure so that it is easier for the consuming application to extract the relevant information from each response. The middleware communicates with other systems, such as the database and the Ethereum node, whose data comes in various formats, and it is the middleware’s responsibility to ensure the data conforms to the required format when communicating with other subsystems within the FlowNode. As this project is under continuous development, one of the main design considerations is to keep the code loosely coupled. One approach to this challenge is to use the Adapter pattern when designing software components.
The Adapter pattern is a software design pattern (also known as Wrapper) that allows the interface of an existing class to be used as another interface. Through the Adapter pattern we encourage a high level of consistency in response outputs; furthermore, it helps decouple the classes and lets us reuse existing code.
Ducky king object adapter.png
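The object adapter shown above can be sketched as follows. The legacy interface, field names, and target shape are hypothetical, not the actual FlowNode classes:

```javascript
// Sketch of the object adapter pattern described above. The legacy client shape
// and all field names are hypothetical illustrations, not the real FlowNode code.
class EthereumNodeClient {
  // Pretend legacy interface: returns a raw, node-specific payload.
  getTransaction(hash) {
    return { tx_hash: hash, blk: 1024, val_wei: '5000000000' };
  }
}

// Target interface the rest of the middleware is written against.
class TransactionSource {
  fetch(id) { throw new Error('not implemented'); }
}

// Adapter: wraps the legacy client and converts its output into the target
// shape, so consumers depend only on TransactionSource, never on the client.
class EthereumNodeAdapter extends TransactionSource {
  constructor(client) { super(); this.client = client; }
  fetch(id) {
    const raw = this.client.getTransaction(id);
    return { id: raw.tx_hash, blockNumber: raw.blk, valueWei: raw.val_wei };
  }
}

const source = new EthereumNodeAdapter(new EthereumNodeClient());
const tx = source.fetch('0xabc'); // consumer sees only the standardised shape
```

If the Ethereum node's payload format changes, only the adapter's `fetch` method needs updating; every consumer of `TransactionSource` is untouched.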

4. Object Relational Mapping (ORM)
ORM is a programming technique that converts data between incompatible type systems, in particular between a relational data store and objects in an object-oriented programming language.
Hand-written SQL and the related wrapper classes require a significant amount of effort to develop, and an upfront database design is required as well. An ORM library reduces the need for this upfront design, as the tool can create the relational model in the database from the models declared in the codebase. Queries are then performed at the model level: the ORM provides function calls for Create-Read-Update-Delete (CRUD) operations, so queries are written in the host object-oriented language and no SQL code needs to be written by hand.
By leveraging ORM libraries, the code base shrinks and a greater degree of code reuse is achieved.
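The team uses Sequelize for this; what an ORM layer does underneath can be shown in miniature. The toy `Model` class below is not the Sequelize API, just the object-to-SQL and row-to-object mapping idea, with hypothetical table and field names:

```javascript
// Toy illustration of the ORM idea (the team uses Sequelize; this is NOT the
// Sequelize API, just the underlying mapping shown in miniature).
class Model {
  constructor(table, fields) {
    this.table = table;
    this.fields = fields;
  }
  // Object -> SQL: build a parameterised INSERT from a plain object.
  insertSql(obj) {
    const cols = this.fields.filter((f) => f in obj);
    const placeholders = cols.map(() => '?').join(', ');
    return {
      sql: `INSERT INTO ${this.table} (${cols.join(', ')}) VALUES (${placeholders})`,
      params: cols.map((c) => obj[c]),
    };
  }
  // Row -> object: map a database row back onto a typed record,
  // dropping any columns the model does not declare.
  fromRow(row) {
    return Object.fromEntries(this.fields.map((f) => [f, row[f]]));
  }
}

const AuctionModel = new Model('auctions', ['id', 'title', 'reserve_price']);
const { sql, params } = AuctionModel.insertSql({ title: 'Lot 7', reserve_price: 100 });
const record = AuctionModel.fromRow({ id: 1, title: 'Lot 7', reserve_price: 100, extra: 'x' });
```

A real ORM like Sequelize adds types, validation, associations, and schema synchronisation on top of this basic mapping, which is why adopting it replaced our hand-written SQL.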

5. Logging
Logging is the recording of implementation-level events that happen as the program runs (methods being called, objects being created, etc.). From a maintenance perspective, the logs serve as an important record of the events that occurred before the application encountered an error. They give greater visibility into the workings of the application, allowing errors and issues to be identified and isolated quickly.
The logs are configured to capture all domain-related events. For example, a “Create Auction” transaction is logged and stored. In the event of potential misuse or an incident, the logs form an audit trail of the sequence of events, which will greatly assist any investigation into a potential breach or malicious behaviour by users.
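The team uses winston for this in production; the kind of levelled, structured audit trail described above can be sketched in a few lines. Event and field names are illustrative:

```javascript
// Miniature structured logger illustrating the audit trail described above.
// The team uses winston in production; this toy version only shows the idea.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

function createLogger(minLevel, sink) {
  return {
    log(level, event, meta = {}) {
      if (LEVELS[level] > LEVELS[minLevel]) return; // drop levels below the threshold
      // One JSON line per event: easy to ship, grep, and replay as an audit trail.
      sink.push(JSON.stringify({ ts: new Date().toISOString(), level, event, ...meta }));
    },
  };
}

const lines = [];
const logger = createLogger('info', lines);
logger.log('info', 'CreateAuction', { auctionId: 42, user: 'org-123' }); // recorded
logger.log('debug', 'SqlQuery', { sql: 'SELECT 1' });                    // filtered out
```

winston provides the same shape out of the box: named levels, JSON formatting, and pluggable transports (console, file, remote), which is what makes the production build's logs queryable after an incident.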

6. Internal Testing
Our team has come up with the following testing lifecycle for our internal testing process. More details can be found on our internal testing wiki page here
DK Test Lifecycle.png
7. UAT
To ensure the quality of our developed products, we have completed 2 rounds of User Acceptance Testing, as shown below:
The UAT details for FlowLabs Middleware can be found here
The UAT details for FlowAdmin Dashboard can be found here

Sponsor's Testimonial

Value to sponsor DuckyKing.png



DuckyKing Footer.png