
IS480 Team wiki: 2012T1 M.O.O.T/Project Management/GR Metrics



Gender Recognition Metrics (Progress Over Time)

| Iteration | Input Parameters | Input Type | Training Set Size | Neural Network Layers | Learning Rate | Momentum | Stop Training | Gender Prediction Accuracy | Results Source |
|---|---|---|---|---|---|---|---|---|---|
| 2 | Shoulder length, height, hip length | Kinect pixel values | - | - | 0.1 | - | - | 50% | Baseline |
| 3 | | | 45 | 2 | 0.1 | 2.0 | Error < 2.1 | 87.14% | User Testing 1 |
| 4 | | Physical world values | 45 | 2 | 0.03 | 0.7 | Iteration > 10000 | 82.22% | Weka (leave-one-out cross-validation) |
| 5 | | | 45 | 2 | 0.03 | 0.7 | Iteration > 10000 | 82.22% | Weka (leave-one-out cross-validation) |
| 6 | | | 45 | 2 | 0.03 | 0.7 | Iteration > 10000 | 82.22% | Weka (leave-one-out cross-validation) |
| 7 | | | 90 | 2 | 0.03 | 0.7 | Error < 3.0 | 73% | User Testing 2 |

(Blank cells repeat the value from the row above.)

After several rounds of fine-tuning the Neural Network parameters, we found that the optimal error threshold is 3 when the learning rate (eta) is 0.03 and the momentum (alpha) is 0.7.
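
The update rule behind these parameters can be sketched as follows. This is a minimal illustration only, not the project's actual code: a single sigmoid neuron trained by gradient descent with the learning rate (eta), momentum (alpha), and the two stop rules from the table above (error threshold and maximum iteration count). The toy data and initial weights are invented.

```python
import math
import random

# Minimal sketch (not the project's actual code) of gradient descent with a
# learning rate (eta) and momentum (alpha), plus the two stop rules from the
# table above: an error threshold and a maximum iteration count.

def train(samples, eta=0.03, alpha=0.7, error_threshold=3.0, max_iter=10000):
    """Train a single sigmoid neuron; returns (weights, bias, final SSE)."""
    random.seed(42)                      # deterministic toy initialisation
    n = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    vw = [0.0] * n                       # momentum terms for the weights
    vb = 0.0                             # momentum term for the bias
    total_error = float("inf")
    for _ in range(max_iter):            # "Iteration > 10000" stop rule
        total_error = 0.0
        for x, target in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            out = 1.0 / (1.0 + math.exp(-z))        # sigmoid activation
            err = target - out
            total_error += err * err
            grad = err * out * (1.0 - out)          # delta rule
            for i in range(n):
                # momentum: new step = eta * gradient + alpha * previous step
                vw[i] = eta * grad * x[i] + alpha * vw[i]
                w[i] += vw[i]
            vb = eta * grad + alpha * vb
            b += vb
        if total_error < error_threshold:           # "Error < 3.0" stop rule
            break
    return w, b, total_error

# Invented toy data: (shoulder width, height, hip width) -> 1 = male, 0 = female
data = [([0.45, 1.80, 0.30], 1), ([0.40, 1.75, 0.32], 1),
        ([0.35, 1.60, 0.36], 0), ([0.33, 1.55, 0.38], 0)]

# The threshold of 3 applies to the full 90-sample training set; for this
# 4-sample toy we tighten it so that training actually runs for a while.
w, b, err = train(data, error_threshold=0.05)
```

The error threshold is compared against the sum of squared errors over the whole training set, which is why a larger training set can tolerate a larger threshold.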

Details of how we fine-tuned the Neural Network parameters can be found in Neural Network Fine-Tuning Details

More information on Neural Networks can also be found at the IS480 Knowledge Base and our Midterm Wiki

Gender Recognition Metrics (Vary Training Set Size)

Objective

Find the optimum training set size that yields the highest gender prediction accuracy

Testing Technique

Leave-one-out cross-validation was used to test how accurately the neural network recognises gender. This technique partitions a data set into two parts: a training set and a validation set. The neural network is trained on the training set, and a prediction is made on the validation set. In leave-one-out cross-validation, each observation takes a turn as the single validation instance while the remaining observations form the training set. The results are then averaged to obtain the percentage of correctly classified instances for the data set. This is equivalent to k-fold cross-validation with k equal to the number of observations. Although leave-one-out cross-validation is computationally expensive, we chose it because it validates every observation, giving the most thorough test.
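
The procedure above can be sketched in a few lines. This is an illustration only: the project's classifier was Weka's MultilayerPerceptron, but a 1-nearest-neighbour stand-in keeps the example short, since the cross-validation loop is the point. The toy data is invented.

```python
# Sketch of leave-one-out cross-validation (LOOCV). A 1-nearest-neighbour
# classifier stands in for the project's MultilayerPerceptron.

def nn_classify(training_set, query):
    """Predict the label of `query` from its nearest training point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_set, key=lambda t: dist2(t[0], query))[1]

def loocv_accuracy(dataset, classify=nn_classify):
    """Hold each observation out once, train on the rest, average the hits."""
    hits = 0
    for i, (features, label) in enumerate(dataset):
        training_set = dataset[:i] + dataset[i + 1:]   # all but observation i
        if classify(training_set, features) == label:
            hits += 1
    return hits / len(dataset)   # fraction of correctly classified instances

# Invented toy data: (height, shoulder width) -> gender label
data = [((1.80, 0.45), "M"), ((1.76, 0.43), "M"), ((1.72, 0.41), "M"),
        ((1.58, 0.34), "F"), ((1.62, 0.36), "F"), ((1.55, 0.33), "F")]
print(loocv_accuracy(data))  # -> 1.0 (each point's nearest neighbour shares its label)
```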

Results

Details of training set & results can be found in Weka Testing Details

| Training Set Size | No. of Males in Training Set | No. of Females in Training Set | Gender Prediction Accuracy |
|---|---|---|---|
| 10 | 5 | 5 | 70% |
| 20 | 10 | 10 | 85% |
| 30 | 15 | 15 | 86.67% |
| 40 | 20 | 20 | 85% |
| 50 | 25 | 25 | 82% |
| 60 | 30 | 30 | 90% |
| 70 | 35 | 35 | 90% |
| 80 | 40 | 40 | 85% |
| 90 | 41 | 49 | 85.56% |

Procedure

1) Launch Weka and open the Explorer from the GUI Chooser
Weka GUI Chooser

2) Select the Preprocess tab
Weka Explorer > Preprocess

3) Open a file that is pre-populated with the input and output parameters
Weka Explorer > Preprocess > Open File

4) Select Class: predicted_gender (Nom) to view the breakdown of males and females in the training set
Weka Explorer > Preprocess > Class: predicted_gender (Nom)

5) Select the Classify tab
Weka Explorer > Classify

6) Select Choose > Functions > MultilayerPerceptron
Weka Explorer > Classify > Choose > MultilayerPerceptron

7) Click on MultilayerPerceptron and configure the neural network parameters
Weka Explorer > Classify > MultilayerPerceptron > Properties

8) Select More options and ensure that all outputs are shown
Weka Explorer > Classify > More options

9) Select (Nom) predicted_gender as the neural network's output
Weka Explorer > Classify > (Nom) predicted_gender

10) View the results
Weka Explorer > Classify > Result
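
For repeated runs, the same evaluation can be driven from the command line instead of the GUI. A sketch only: the file name gender.arff and the weka.jar path are placeholders, not the team's actual files.

```shell
# -t: training ARFF file   -x: cross-validation folds (setting folds equal
#                              to the 90 instances gives leave-one-out
#                              cross-validation)
# -L: learning rate   -M: momentum   -N: number of training epochs
java -cp weka.jar weka.classifiers.functions.MultilayerPerceptron \
  -t gender.arff -x 90 -L 0.03 -M 0.7 -N 10000
```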

Gender Recognition Metrics (Progress of Improving Input)

As gender recognition using the Kinect is a largely unexplored area and the Kinect SDK provides no API for detecting gender, it is essential for us to use metrics to gauge the progress of our gender recognition algorithm. To ensure a fair test, we made sure our testers included a good mix of males and females with different body proportions and different kinds of clothing.

We plan to have 30 testers (15 males and 15 females) and will record a short video of each of the 30 individuals standing in front of AlterSense using Kinect Studio. Because Kinect Studio can record a Kinect session and play it back, we can replay the videos each time we make progress on our gender recognition algorithm. Using the same testers for every test gives us a fair before-and-after comparison of the algorithm.

How We Detect Gender

Initially, we used 5 parameters to determine gender:

  • Height

It is widely known that males are generally taller than females. As East Asians tend to be of a smaller build than Caucasians, we took 1.7m as the dividing line: the majority of females are shorter than 1.7m, while many males are taller. There are exceptions, since some females are taller than 1.7m and some males are shorter, so we take other factors into consideration as well.

  • Presence of bag

We observed that, in practice, only females carry a bag on the elbow. Thus, when a person is detected carrying a bag on the elbow, that person is very likely female.

  • Presence of long hair

In Singapore, the majority of people with long hair are female; exceptions are very rare. By calculating the width of a person's neck, we can be fairly confident that an outline with a wider-than-usual neck belongs to a female, since long hair adds to the apparent width of the neck.

  • Presence of long skirt

In Singapore, skirts are currently worn almost exclusively by females. By calculating the width of the hem and the slope of the garment, we can distinguish skirts from shorts and pants, since shorts and pants do not have a significant slope. Unfortunately, this cannot distinguish a tight-fitting skirt from shorts or pants.

  • Shoulder width & Center of moment

Based on our research, males have a larger centre-of-moment value than females of similar height and weight, because males tend to have broader shoulders while females generally have wider hips. Using this value, we attempt to distinguish a male from a female of similar build who has short hair, does not wear a skirt, and does not carry a bag on the elbow.

After a round of testing during User Testing 1, we decided to drop the bag detection parameter: the Kinect cannot track a tester's arm coordinates accurately when the tester carries a bag on the shoulder, because it detects the bag as part of the arm. This makes it difficult to determine whether a person is carrying a bag on the elbow.

This leaves us with 4 parameters to determine gender:

  • Height
  • Presence of long hair
  • Presence of long skirt
  • Shoulder width & Center of moment
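
As an illustration of how these four cues could combine, here is a rule-based sketch. It is not the team's actual algorithm (which feeds the parameters into a Neural Network); the score weights and the neck-width ratio are invented, and only the 1.7m height cutoff comes from the description above.

```python
# Invented rule-based sketch of combining the four gender cues described
# above. Only the 1.7 m height cutoff comes from the text; the weights,
# neck-width ratio and hem-slope threshold are illustrative assumptions.

def female_score(height_m, neck_width_m, typical_neck_width_m,
                 hem_slope, shoulder_width_m, hip_width_m):
    score = 0.0
    if height_m < 1.70:
        score += 1.0   # most females observed are under 1.7 m
    if neck_width_m > typical_neck_width_m * 1.3:
        score += 1.5   # widened neck outline -> likely long hair
    if hem_slope > 0.5:
        score += 2.0   # significantly sloped hem -> likely a skirt
    if hip_width_m > shoulder_width_m:
        score += 1.0   # hips wider than shoulders (centre-of-moment cue)
    return score

def predict_gender(*args):
    return "female" if female_score(*args) >= 2.0 else "male"

# Example: 1.62 m tall, widened neck, sloped hem, hips wider than shoulders
print(predict_gender(1.62, 0.16, 0.11, 0.8, 0.36, 0.38))  # -> female
```

In the actual project these cues are passed to the Neural Network, which learns the weighting itself instead of using hand-picked scores like these.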

Testers' profile

Based on the 5 (now 4) parameters listed above, we picked our testers as follows:

| Parameter | Female | Male |
|---|---|---|
| Height | 5 with height ≤ 1.6m; 5 with height > 1.6m but < 1.7m; 5 with height ≥ 1.7m | 5 with height ≤ 1.7m; 5 with height > 1.7m but < 1.8m; 5 with height ≥ 1.8m |
| Bag [dropped] | 8 carrying a bag on the elbow; 7 not carrying a bag on the elbow | N.A. |
| Hair | 7 with short hair (does not touch the shoulders); 8 with shoulder-length or longer hair | N.A. |
| Skirt | 7 not wearing a skirt; 8 wearing a skirt | N.A. |
| Centre of moment (COM) | We will attempt to find ≥ 5 pairs of females and males with similar height and similar body proportions (applies to both columns) | |

Parameter Metrics

Click to see the details of Gender Recognition Metrics: Gender Recognition Metrics

| Version | Changes made to algorithm | Skirt detection accuracy | Hair detection accuracy | Bag detection accuracy | Overall gender accuracy |
|---|---|---|---|---|---|
| 1 | All 5 parameters implemented, but only 2 (height, shoulder & hip width) integrated into the Neural Network, i.e. gender was determined by height and shoulder & hip width in this version | 62.86% | 38.57% | 22.86% | 87.14% |
| 2 | The remaining 2 parameters (hair and skirt) integrated into the Neural Network | 63% | 50% | - | 69% |

Comments on Version 1 (tested in UT1):

  • Currently the gender recognition method is accurate only when the tester stands at a specific distance from the Kinect.
  • We have decided to drop the bag detection parameter due to the 2 reasons stated here.

Comments on Version 2:

  • Because of their low accuracy, the hair and skirt parameters dragged down the final gender prediction when they were passed into the Neural Network as biased inputs: a biased input carries a much higher weightage than a non-biased parameter in influencing the final outcome, so the final accuracy dropped significantly.
  • Gender prediction works well with just the three non-biased parameters (height, shoulder & hip width), as seen above, with accuracy reaching 87%; there was therefore no need to introduce any biased parameter to influence the final outcome.
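
The effect of a biased input can be shown with a toy weighted vote. This is our illustration of the mechanism with invented weights, not measured data: when an unreliable cue carries a large weight, it can override the reliable cues even when they vote correctly.

```python
# Toy illustration (invented weights, not measured data) of why a heavily
# weighted but unreliable "biased" input drags down the combined prediction.

def combined_prediction(reliable_vote, biased_vote,
                        reliable_weight=1.0, biased_weight=3.0):
    """Weighted vote: +1 means 'female', -1 means 'male'."""
    total = reliable_weight * reliable_vote + biased_weight * biased_vote
    return 1 if total > 0 else -1

# Ground truth is 'female' (+1). The reliable cues (height, shoulder & hip
# width) vote correctly; the biased cue (e.g. a mis-detected skirt) votes
# wrongly, and wins on weight alone.
print(combined_prediction(+1, -1))  # -> -1: the biased cue flips the outcome
```

Lowering the biased weight below the reliable weight lets the reliable cues win again, which is essentially why dropping the biased inputs restored the accuracy.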

| Overall accuracy | Action plan |
|---|---|
| Accuracy ≤ 50% | Hold a team meeting to consider dropping gender recognition if accuracy still does not improve after several iterations |
| 50% < Accuracy ≤ 70% | Search for more ways to improve accuracy, either by improving the current detection methods or by looking for new parameters useful for gender recognition |
| Accuracy > 70% | May consider further improving the gender recognition algorithm, but this is not deemed necessary |