Compare commits

64 Commits

Author SHA1 Message Date
Sem van der Hoeven
a8996f63ef [ADD] start game scene fingers 2021-06-08 16:06:46 +02:00
Sem van der Hoeven
88252f4dc8 [ADD] static skin treshold 2021-06-08 15:35:03 +02:00
Sem van der Hoeven
27594d466b [ADD] better info on camera 2021-06-08 15:11:54 +02:00
Sem van der Hoeven
5e137faef5 [ADD] up left and right detection regions 2021-06-08 14:48:46 +02:00
Sem van der Hoeven
cadee7d8e9 [ADD] hand detection type enum 2021-06-08 13:38:47 +02:00
Sem van der Hoeven
1e55736615 [ADD] multiple hand detection squares 2021-06-08 13:17:07 +02:00
Sem van der Hoeven
ef470bd4f1 [ADD] start of multiple squares 2021-06-08 11:54:48 +02:00
Sem van der Hoeven
afd3e00ddb [ADD] contour of hand in calibration screen 2021-06-08 11:03:22 +02:00
Sem van der Hoeven
bb68d98bfe [ADD] comments 2021-06-08 10:43:59 +02:00
Sem van der Hoeven
e70d2ef19d [EDIT] removed unused hand stuff 2021-06-04 16:32:30 +02:00
Sem van der Hoeven
1ab5ae798e [ADD] hand calibration screen 2021-06-04 16:27:30 +02:00
Sem van der Hoeven
e4b5dc39c0 [EDIT] change variable names in compliance with code style guide 2021-06-04 15:10:19 +02:00
Sem van der Hoeven
b80653b668 [EDIT] change method names in compliance with code style guide 2021-06-04 15:07:36 +02:00
Sem van der Hoeven
ca2959bf2d [FIX] detecting hand when its just a finger or hair 2021-06-04 14:48:19 +02:00
Sem van der Hoeven
81dec3b9f4 [ADD] detecting if hand is in square 2021-06-04 13:10:11 +02:00
Sem van der Hoeven
f5926fffcb [ADD] detecting if hand is in square 2021-06-04 13:09:51 +02:00
Sem van der Hoeven
4c49895f6d [MERGE] async compvision 2021-06-04 12:29:36 +02:00
Sem van der Hoeven
921609de5d [EDIT] stuff 2021-06-04 11:39:23 +02:00
Sem van der Hoeven
fe94b0f83d [FIX] showing of pose detection points 2021-06-04 10:55:53 +02:00
Sem van der Hoeven
ab30c41bee [FIX] crashing with pose detection 2021-06-02 10:41:50 +02:00
Sem van der Hoeven
1a149b8b7e [ADD] static camera instance 2021-06-02 10:05:09 +02:00
Sem van der Hoeven
cc7cb37840 [ADD] caffemodel project entry 2021-06-02 09:44:30 +02:00
Menno
ef466c9d95 Merge branch 'feature/timer' into develop 2021-06-01 11:41:43 +02:00
Menno
f03cc485cd [ADDED] timer class 2021-06-01 11:41:21 +02:00
Menno
739b4a9eb6 [FIXED] merge 2021-05-28 16:17:15 +02:00
Menno
0c654a51b9 Merge branch 'feature/collision' into develop 2021-05-28 16:14:49 +02:00
Menno
ef058b0087 [FIXED] merge 2021-05-28 16:12:51 +02:00
Menno
1ef0d87437 Merge remote-tracking branch 'origin/feature/adding_scenes' into feature/collision 2021-05-28 16:08:35 +02:00
Lars
5d31327a47 [ADD] The scene switching works now, the only thing to do is controling the scenes with the keys! 2021-05-28 16:04:41 +02:00
Menno
e8b3e1b482 [FEATURE] collisions!!!!!!!!!!! YAY 2021-05-28 15:45:46 +02:00
Sem van der Hoeven
40529f84b3 [ADD] basis for async arm detection 2021-05-28 15:31:21 +02:00
Jasper
a68c6a57bf [EDIT] edited file 2021-05-28 12:32:10 +02:00
Jasper
078a6ce66d [ADD] added all the files 2021-05-28 12:27:12 +02:00
Lars
93b3223737 [testing] shader ddoesnt work, still on it 2021-05-28 12:08:12 +02:00
Menno
28400fb320 [ADDED] simple collision logic for entities 2021-05-28 11:37:21 +02:00
Sem van der Hoeven
f1f1aac93d [ADD] comments 2021-05-25 15:54:02 +02:00
Menno
51cdc520e0 [EDIT] camera controls 2021-05-25 15:51:29 +02:00
Sem van der Hoeven
563f465e2c [EDIT] remove unused methods 2021-05-25 15:46:53 +02:00
Menno
cc7fae5d2f [ADD] comments 2021-05-25 14:55:27 +02:00
Sem van der Hoeven
05ae8ee019 [FEATURE] finished hand open/closed recognition 2021-05-25 14:49:04 +02:00
Menno
977d377fe5 [FEATURE] working gui buttons 2021-05-25 14:38:35 +02:00
Sem van der Hoeven
3696e2eb30 [EDIT] improve hand detection with mask 2021-05-25 14:19:18 +02:00
Sem van der Hoeven
276aa1a449 [ADD] mask methods 2021-05-25 13:31:25 +02:00
Menno
21a7f4f4b2 [FEATURE] simple GUI support 2021-05-25 12:36:58 +02:00
Sem van der Hoeven
ad4075a826 [EDIT] change window size to ints 2021-05-25 10:19:33 +02:00
Sem van der Hoeven
e50cd92a35 [ADD] some headers 2021-05-25 10:16:58 +02:00
Menno
97a7501cda [FEATURE] multiple light support 2021-05-21 16:25:39 +02:00
Sem van der Hoeven
ff79c1525c Merge branch 'feature/objectdetection' into develop 2021-05-21 15:25:29 +02:00
Nathalie Seen
a7597c8d4f [ADD] comments to backgroundRemover 2021-05-21 15:24:06 +02:00
Sem van der Hoeven
5b4d9b624f Merge branch 'feature/objectdetection' into develop 2021-05-21 15:22:38 +02:00
Menno
bd227d3afe Merge remote-tracking branch 'origin/feature/rendering-engine-expansion-comments' into feature/rendering-engine-expansion 2021-05-21 15:22:27 +02:00
Sem van der Hoeven
27aca98ea4 Merge branch 'feature/comments' into feature/objectdetection 2021-05-21 15:22:07 +02:00
Lars
e2464ec8fd [comment] commented some temporary code in the main and commented a new method that gets the size of a model. 2021-05-21 15:21:27 +02:00
Sem van der Hoeven
ca591dd427 [ADD] comments to fingercount 2021-05-21 15:21:03 +02:00
Jasper
8720e50ba8 [ADD] added comments to the classes FaceDetector and ObjectDetection 2021-05-21 15:10:05 +02:00
Menno
bb4fcbc97b [EDIT] comments on each function 2021-05-21 15:09:07 +02:00
Sem van der Hoeven
acf24cab36 [ADD] comments to skindetector 2021-05-21 14:56:45 +02:00
Menno
e10aea5a15 [ADDED] smoke and some small changes 2021-05-21 14:12:30 +02:00
Sem van der Hoeven
1811bf51a4 [EDIT] added base for hand detection 2021-05-21 13:23:33 +02:00
Jasper
27a09aeca4 [EDIT] added evrything to namespace, also fixed includes 2021-05-21 12:12:42 +02:00
Jasper
e39cb1a761 [ADD] added handy files 2021-05-21 11:52:47 +02:00
Menno
e2f6bd720d [EDIT] renaming the static_shader 2021-05-21 08:43:32 +02:00
Menno
9e9d50da9e [FEATURE] single light support 2021-05-21 08:36:04 +02:00
Menno
01571d191f [FIXED] openCV included 2021-05-18 14:05:56 +02:00
62 changed files with 37300 additions and 431 deletions

.gitignore (vendored, 2 lines changed)

@@ -428,4 +428,6 @@ FodyWeavers.xsd
**/docs/*
**/doc/*
**/pose_iter_160000.caffemodel
# End of https://www.toptal.com/developers/gitignore/api/c++,visualstudio,visualstudiocode,opencv

res/House.obj (new file, 4495 lines); diff suppressed because it is too large

res/Mayo.png (new binary file, 323 KiB, not shown)

res/Texture.png (new binary file, 970 B, not shown)

Three more file diffs suppressed because they are too large.

src/collision/collision.h (new file, 31 lines)

@@ -0,0 +1,31 @@
#pragma once
#include <glm/gtc/matrix_transform.hpp>
#include "../entities/entity.h"
namespace collision
{
/*
* This structure represents a collision box inside the world.
*
* center_pos: The center position of the collision box
* size: The size in each axis of the collision box
*/
struct Box
{
glm::vec3 center_pos;
glm::vec3 size;
};
/*
* This structure represents a collision between 2 entities
*
* entity1: The first entity
* entity2: The second entity
*/
struct Collision
{
entities::Entity& entity1;
entities::Entity& entity2;
};
}

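A brief usage sketch, not part of the diff, showing how a collision box is meant to be filled in; the values and the include path are illustrative assumptions.

// Hypothetical usage of collision::Box (values and include path are assumptions).
#include <glm/vec3.hpp>
#include "collision/collision.h"

int main()
{
    collision::Box player_box;
    player_box.center_pos = glm::vec3(0.0f, 0.0f, 0.0f);   // centre of the box in world space
    player_box.size = glm::vec3(1.0f, 2.0f, 1.0f);         // extent along each axis

    // collision::Collision is filled in by the collision handler when two
    // entities overlap; it only bundles references to the two entities involved.
    return 0;
}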
collision_handler.cpp (new file)

@@ -0,0 +1,40 @@
#include "collision_handler.h"
namespace collision
{
void CheckCollisions(std::vector<entities::CollisionEntity*>& entities)
{
// check every unique pair of entities exactly once; the i + 1 < size() and
// j < size() bounds are also safe when the list holds fewer than two entities
for (size_t i = 0; i + 1 < entities.size(); i++)
{
entities::CollisionEntity* entity = entities[i];
for (size_t j = i + 1; j < entities.size(); j++)
{
entities::CollisionEntity* entity2 = entities[j];
if (entity == entity2)
{
continue; // the same entity was registered twice
}
if (entity->IsColliding(*entity2))
{
collision::Collision c = { *entity, *entity2 };
entity->OnCollide(c);
entity2->OnCollide(c);
break; // report only the first collision found for this entity
}
}
}
}
}

collision_handler.h (new file)

@@ -0,0 +1,16 @@
#pragma once
#include <vector>
#include "../entities/collision_entity.h"
#include "collision.h"
namespace collision
{
/*
* @brief: This function will check all the collision entities for
* collisions and call the OnCollide function when a entity collides.
*
* @param entities: A list with all the collision entities.
*/
void CheckCollisions(std::vector<entities::CollisionEntity*>& entities);
}

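A hedged sketch of how a game loop might drive CheckCollisions; the entity list and the OnAnyCollision callback are illustrative names, not part of the diff.

// Hypothetical per-frame usage of collision::CheckCollisions.
#include <vector>
#include "collision/collision_handler.h"   // assumed include path
#include "entities/collision_entity.h"     // assumed include path

void OnAnyCollision(const collision::Collision& c)
{
    // react to the pair of entities stored in c.entity1 / c.entity2
}

void UpdatePhysics(std::vector<entities::CollisionEntity*>& world_entities)
{
    for (entities::CollisionEntity* e : world_entities)
        e->SetCollisionBehaviour(OnAnyCollision);   // plain function pointer, per the header

    // checks every unique pair and calls OnCollide on both entities of a colliding pair
    collision::CheckCollisions(world_entities);
}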
BackgroundRemover.cpp (new file)

@@ -0,0 +1,59 @@
#include "BackgroundRemover.h"
/*
Author: Pierfrancesco Soffritti https://github.com/PierfrancescoSoffritti
*/
namespace computervision
{
BackgroundRemover::BackgroundRemover(void) {
background;
calibrated = false;
}
void BackgroundRemover::calibrate(Mat input) {
cvtColor(input, background, CV_BGR2GRAY);
calibrated = true;
}
Mat BackgroundRemover::getForeground(Mat input) {
Mat foregroundMask = getForegroundMask(input);
//imshow("foregroundMask", foregroundMask);
Mat foreground;
input.copyTo(foreground, foregroundMask);
return foreground;
}
Mat BackgroundRemover::getForegroundMask(Mat input) {
Mat foregroundMask;
if (!calibrated) {
foregroundMask = Mat::zeros(input.size(), CV_8UC1);
return foregroundMask;
}
cvtColor(input, foregroundMask, CV_BGR2GRAY);
removeBackground(foregroundMask, background);
return foregroundMask;
}
void BackgroundRemover::removeBackground(Mat input, Mat background) {
int thresholdOffset = 25;
for (int i = 0; i < input.rows; i++) {
for (int j = 0; j < input.cols; j++) {
uchar framePixel = input.at<uchar>(i, j);
uchar bgPixel = background.at<uchar>(i, j);
if (framePixel >= bgPixel - thresholdOffset && framePixel <= bgPixel + thresholdOffset)
input.at<uchar>(i, j) = 0;
else
input.at<uchar>(i, j) = 255;
}
}
}
}

BackgroundRemover.h (new file)

@@ -0,0 +1,58 @@
#pragma once
#include"opencv2\opencv.hpp"
#include <opencv2/imgproc\types_c.h>
/*
Author: Pierfrancesco Soffritti https://github.com/PierfrancescoSoffritti
*/
namespace computervision
{
using namespace cv;
using namespace std;
class BackgroundRemover {
public:
/**
* @brief constructor; creates the background variable and sets calibrated to false
*
*/
BackgroundRemover(void);
/**
* @brief converts the input image to grayscale, stores it as the background,
* and sets calibrated to true
*
* @param input the frame to calibrate the background from
*/
void calibrate(Mat input);
/**
* @brief Gets the mask of the foreground of the input image
* and copies the foreground onto a new image
*
* @param input The image from which the foreground needs to be extracted
* @return The image onto which the foreground mask is copied
*/
Mat getForeground(Mat input);
private:
Mat background;
bool calibrated = false;
/**
* @brief Converts the image to grayscale and removes the background
*
* @param input The image from which the foreground needs to be extracted
* @return The mask of the foreground of the image
*/
Mat getForegroundMask(Mat input);
/**
* @brief makes everything on the background black
*
* @param input the image from which the background needs to be removed
* @param background the background of the image
*/
void removeBackground(Mat input, Mat background);
};
}

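A minimal sketch of the calibrate-then-subtract flow BackgroundRemover is written for; the camera index, window name and key handling are assumptions.

// Illustrative background-removal loop. Pressing 'b' stores the current frame
// (converted to grayscale) as the background; later frames are compared against
// it pixel by pixel with a +/-25 grey-level tolerance.
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include "BackgroundRemover.h"   // assumed include path

int main()
{
    cv::VideoCapture cap(0);
    computervision::BackgroundRemover remover;
    cv::Mat frame;

    while (cap.read(frame))
    {
        int key = cv::waitKey(1);
        if (key == 'b')                       // point the camera at the empty scene first
            remover.calibrate(frame);

        cv::imshow("foreground", remover.getForeground(frame));
        if (key == 27) break;                 // Esc quits
    }
    return 0;
}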
FingerCount.cpp (new file)

@@ -0,0 +1,301 @@
#include "FingerCount.h"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
/*
Author: Nicolò Castellazzi https://github.com/nicast
*/
#define LIMIT_ANGLE_SUP 60
#define LIMIT_ANGLE_INF 5
#define BOUNDING_RECT_FINGER_SIZE_SCALING 0.3
#define BOUNDING_RECT_NEIGHBOR_DISTANCE_SCALING 0.05
namespace computervision
{
FingerCount::FingerCount(void) {
color_blue = Scalar(255, 0, 0);
color_green = Scalar(0, 255, 0);
color_red = Scalar(0, 0, 255);
color_black = Scalar(0, 0, 0);
color_white = Scalar(255, 255, 255);
color_yellow = Scalar(0, 255, 255);
color_purple = Scalar(255, 0, 255);
}
Mat FingerCount::findFingersCount(Mat input_image, Mat frame) {
Mat contours_image = Mat::zeros(input_image.size(), CV_8UC3);
// check if the source image is good
if (input_image.empty())
return contours_image;
// we work only on a single-channel image; since this function is called inside a loop we cannot be sure that this is always the case
if (input_image.channels() != 1)
return contours_image;
findContours(input_image, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// we need at least one contour to work
if (contours.size() <= 0)
return contours_image;
// find the biggest contour (let's suppose it's our hand)
biggest_contour_index = -1;
double biggest_area = 0.0;
for (int i = 0; i < contours.size(); i++) {
double area = contourArea(contours[i], false);
if (area > biggest_area) {
biggest_area = area;
biggest_contour_index = i;
}
}
if (biggest_contour_index < 0)
return contours_image;
// find the convex hull object for each contour and the defects, two different data structure are needed by the OpenCV api
vector<Point> hull_points;
vector<int> hull_ints;
// for drawing the convex hull and for finding the bounding rectangle
convexHull(Mat(contours[biggest_contour_index]), hull_points, true);
// for finding the defects
convexHull(Mat(contours[biggest_contour_index]), hull_ints, false);
// we need at least 3 points to find the defects
vector<Vec4i> defects;
if (hull_ints.size() > 3)
convexityDefects(Mat(contours[biggest_contour_index]), hull_ints, defects);
else
return contours_image;
// we bound the convex hull
Rect bounding_rectangle = boundingRect(Mat(hull_points));
// we find the center of the bounding rectangle, this should approximately also be the center of the hand
Point center_bounding_rect(
(bounding_rectangle.tl().x + bounding_rectangle.br().x) / 2,
(bounding_rectangle.tl().y + bounding_rectangle.br().y) / 2
);
// we separate the defects, keeping only the ones of interest
vector<Point> start_points;
vector<Point> far_points;
for (int i = 0; i < defects.size(); i++) {
start_points.push_back(contours[biggest_contour_index][defects[i].val[0]]);
// filtering the far point based on the distance from the center of the bounding rectangle
if (findPointsDistance(contours[biggest_contour_index][defects[i].val[2]], center_bounding_rect) < bounding_rectangle.height * BOUNDING_RECT_FINGER_SIZE_SCALING)
far_points.push_back(contours[biggest_contour_index][defects[i].val[2]]);
}
// we compact them on their medians
vector<Point> filtered_start_points = compactOnNeighborhoodMedian(start_points, bounding_rectangle.height * BOUNDING_RECT_NEIGHBOR_DISTANCE_SCALING);
vector<Point> filtered_far_points = compactOnNeighborhoodMedian(far_points, bounding_rectangle.height * BOUNDING_RECT_NEIGHBOR_DISTANCE_SCALING);
// now we try to find the fingers
vector<Point> filtered_finger_points;
if (filtered_far_points.size() > 1) {
vector<Point> finger_points;
for (int i = 0; i < filtered_start_points.size(); i++) {
vector<Point> closest_points = findClosestOnX(filtered_far_points, filtered_start_points[i]);
if (isFinger(closest_points[0], filtered_start_points[i], closest_points[1], LIMIT_ANGLE_INF, LIMIT_ANGLE_SUP, center_bounding_rect, bounding_rectangle.height * BOUNDING_RECT_FINGER_SIZE_SCALING))
finger_points.push_back(filtered_start_points[i]);
}
if (finger_points.size() > 0) {
// we have at most five fingers usually :)
while (finger_points.size() > 5)
finger_points.pop_back();
// filter out the points too close to each other
for (int i = 0; i < finger_points.size() - 1; i++) {
if (findPointsDistanceOnX(finger_points[i], finger_points[i + 1]) > bounding_rectangle.height * BOUNDING_RECT_NEIGHBOR_DISTANCE_SCALING * 1.5)
filtered_finger_points.push_back(finger_points[i]);
}
if (finger_points.size() > 2) {
if (findPointsDistanceOnX(finger_points[0], finger_points[finger_points.size() - 1]) > bounding_rectangle.height * BOUNDING_RECT_NEIGHBOR_DISTANCE_SCALING * 1.5)
filtered_finger_points.push_back(finger_points[finger_points.size() - 1]);
}
else
filtered_finger_points.push_back(finger_points[finger_points.size() - 1]);
}
}
// we draw what we found on the returned image
drawContours(contours_image, contours, biggest_contour_index, color_green, 2, 8, hierarchy);
polylines(contours_image, hull_points, true, color_blue);
rectangle(contours_image, bounding_rectangle.tl(), bounding_rectangle.br(), color_red, 2, 8, 0);
circle(contours_image, center_bounding_rect, 5, color_purple, 2, 8);
drawVectorPoints(contours_image, filtered_start_points, color_blue, true);
drawVectorPoints(contours_image, filtered_far_points, color_red, true);
drawVectorPoints(contours_image, filtered_finger_points, color_yellow, false);
putText(contours_image, to_string(filtered_finger_points.size()), center_bounding_rect, FONT_HERSHEY_PLAIN, 3, color_purple);
// and on the starting frame
drawContours(frame, contours, biggest_contour_index, color_green, 2, 8, hierarchy);
circle(frame, center_bounding_rect, 5, color_purple, 2, 8);
drawVectorPoints(frame, filtered_finger_points, color_yellow, false);
putText(frame, to_string(filtered_finger_points.size()), center_bounding_rect, FONT_HERSHEY_PLAIN, 3, color_purple);
amount_of_fingers = filtered_finger_points.size();
return contours_image;
}
void FingerCount::DrawHandContours(Mat& image)
{
drawContours(image, contours, biggest_contour_index, color_green, 2, 8, hierarchy);
}
int FingerCount::getAmountOfFingers()
{
return amount_of_fingers;
}
double FingerCount::findPointsDistance(Point a, Point b) {
Point difference = a - b;
return sqrt(difference.ddot(difference));
}
vector<Point> FingerCount::compactOnNeighborhoodMedian(vector<Point> points, double max_neighbor_distance) {
vector<Point> median_points;
if (points.size() == 0)
return median_points;
if (max_neighbor_distance <= 0)
return median_points;
// we start with the first point
Point reference = points[0];
Point median = points[0];
for (int i = 1; i < points.size(); i++) {
if (findPointsDistance(reference, points[i]) > max_neighbor_distance) {
// the point is not in range, we save the median
median_points.push_back(median);
// we swap the reference
reference = points[i];
median = points[i];
}
else
median = (points[i] + median) / 2;
}
// last median
median_points.push_back(median);
return median_points;
}
double FingerCount::findAngle(Point a, Point b, Point c) {
double ab = findPointsDistance(a, b);
double bc = findPointsDistance(b, c);
double ac = findPointsDistance(a, c);
return acos((ab * ab + bc * bc - ac * ac) / (2 * ab * bc)) * 180 / CV_PI;
}
bool FingerCount::isFinger(Point a, Point b, Point c, double limit_angle_inf, double limit_angle_sup, Point palm_center, double min_distance_from_palm) {
double angle = findAngle(a, b, c);
if (angle > limit_angle_sup || angle < limit_angle_inf)
return false;
// the finger point should not be under the two far points
int delta_y_1 = b.y - a.y;
int delta_y_2 = b.y - c.y;
if (delta_y_1 > 0 && delta_y_2 > 0)
return false;
// the two far points should not be both under the center of the hand
int delta_y_3 = palm_center.y - a.y;
int delta_y_4 = palm_center.y - c.y;
if (delta_y_3 < 0 && delta_y_4 < 0)
return false;
double distance_from_palm = findPointsDistance(b, palm_center);
if (distance_from_palm < min_distance_from_palm)
return false;
// this should be the case when no fingers are up
double distance_from_palm_far_1 = findPointsDistance(a, palm_center);
double distance_from_palm_far_2 = findPointsDistance(c, palm_center);
if (distance_from_palm_far_1 < min_distance_from_palm / 4 || distance_from_palm_far_2 < min_distance_from_palm / 4)
return false;
return true;
}
vector<Point> FingerCount::findClosestOnX(vector<Point> points, Point pivot) {
vector<Point> to_return(2);
if (points.size() == 0)
return to_return;
double distance_x_1 = DBL_MAX;
double distance_1 = DBL_MAX;
double distance_x_2 = DBL_MAX;
double distance_2 = DBL_MAX;
int index_found = 0;
for (int i = 0; i < points.size(); i++) {
double distance_x = findPointsDistanceOnX(pivot, points[i]);
double distance = findPointsDistance(pivot, points[i]);
if (distance_x < distance_x_1 && distance_x != 0 && distance <= distance_1) {
distance_x_1 = distance_x;
distance_1 = distance;
index_found = i;
}
}
to_return[0] = points[index_found];
for (int i = 0; i < points.size(); i++) {
double distance_x = findPointsDistanceOnX(pivot, points[i]);
double distance = findPointsDistance(pivot, points[i]);
if (distance_x < distance_x_2 && distance_x != 0 && distance <= distance_2 && distance_x != distance_x_1) {
distance_x_2 = distance_x;
distance_2 = distance;
index_found = i;
}
}
to_return[1] = points[index_found];
return to_return;
}
double FingerCount::findPointsDistanceOnX(Point a, Point b) {
double to_return = 0.0;
if (a.x > b.x)
to_return = a.x - b.x;
else
to_return = b.x - a.x;
return to_return;
}
void FingerCount::drawVectorPoints(Mat image, vector<Point> points, Scalar color, bool with_numbers) {
for (int i = 0; i < points.size(); i++) {
circle(image, points[i], 5, color, 2, 8);
if (with_numbers)
putText(image, to_string(i), points[i], FONT_HERSHEY_PLAIN, 3, color);
}
}
}

FingerCount.h (new file)

@@ -0,0 +1,129 @@
#pragma once
#include "opencv2/core.hpp"
#include <opencv2/imgproc/types_c.h>
/*
Author: Nicolò Castellazzi https://github.com/nicast
*/
namespace computervision
{
using namespace cv;
using namespace std;
class FingerCount {
public:
FingerCount(void);
/**
* @brief gets the amount of fingers that are held up.
*
* @param input_image the source image to find the fingers on. It should be a mask of a hand
* @param frame the frame to draw the resulting values on (how many fingers are held up etc)
* @return a new image with all the data drawn on it.
*/
Mat findFingersCount(Mat input_image, Mat frame);
/**
* @brief gets the currently held-up finger count.
*
* @return the currently held-up finger count
*/
int getAmountOfFingers();
void DrawHandContours(Mat& image);
private:
int biggest_contour_index;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
// colors to use
Scalar color_blue;
Scalar color_green;
Scalar color_red;
Scalar color_black;
Scalar color_white;
Scalar color_yellow;
Scalar color_purple;
int amount_of_fingers;
/**
* @brief finds the distance between 2 points.
*
* @param a the first point
* @param b the second point
* @return a double representing the distance
*/
double findPointsDistance(Point a, Point b);
/**
* @brief compacts the given points on their medians.
* what it does is: for each point, it checks if the distance to its neighbour is greater than the
* max distance. If so, it just adds it to the list that is returned. If not, it calculates the
* median and adds it to the returned list
*
* @param points the points to compact
* @param max_neighbor_distance the maximum distance between points
* @return a vector with the points now compacted.
*/
vector<Point> compactOnNeighborhoodMedian(vector<Point> points, double max_neighbor_distance);
/**
* @brief finds the angle between 3 different points.
*
* @param a the first point
* @param b the second point
* @param c the third point
* @return the angle between the 3 points
*/
double findAngle(Point a, Point b, Point c);
/**
* @brief checks if the given points make up a finger.
*
* @param a the first point to check for
* @param b the second point to check for
* @param c the third point to check for
* @param limit_angle_inf the limit of the angle between 2 fingers
* @param limit_angle_sup the limit of the angle between a finger and a convex point
* @param palm_center the center of the palm
* @param distance_from_palm_tollerance the tolerance on the distance from the palm
* @return true if the points are a finger, false if not.
*/
bool isFinger(Point a, Point b, Point c, double limit_angle_inf, double limit_angle_sup, cv::Point palm_center, double distance_from_palm_tollerance);
/**
* @brief finds the two points in the given list that are closest to the pivot on the x axis.
*
* @param points the points to check
* @param pivot the pivot to check against
* @return a vector containing the two closest points
*/
vector<Point> findClosestOnX(vector<Point> points, Point pivot);
/**
* @brief finds the distance between the x coords of the points.
*
* @param a the first point
* @param b the second point
* @return the distance between the x values
*/
double findPointsDistanceOnX(Point a, Point b);
/**
* @brief draws the points on the image.
*
* @param image the image to draw on
* @param points the points to draw
* @param color the color to draw them with
* @param with_numbers if the numbers should be drawn with the points
*/
void drawVectorPoints(Mat image, vector<Point> points, Scalar color, bool with_numbers);
};
}

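A short assumed usage of FingerCount, taking for granted that a binary hand mask is produced elsewhere, for example by SkinDetector::getSkinMask.

// Hypothetical helper: count fingers on an already-computed hand mask and
// annotate the camera frame. Names are illustrative, not from the diff.
#include <opencv2/highgui.hpp>
#include "FingerCount.h"   // assumed include path

void CountFingers(const cv::Mat& hand_mask, cv::Mat& camera_frame)
{
    static computervision::FingerCount finger_count;   // reused across frames

    // draws contours, hull, bounding box and the count onto both images
    cv::Mat debug_view = finger_count.findFingersCount(hand_mask, camera_frame);
    int fingers = finger_count.getAmountOfFingers();

    cv::imshow("finger debug", debug_view);
    // e.g. treat fingers > 0 as "hand open" and 0 as "hand closed"
}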
HandDetectRegion.cpp (new file)

@@ -0,0 +1,105 @@
#include "HandDetectRegion.h"
namespace computervision
{
HandDetectRegion::HandDetectRegion(std::string id,int x_pos, int y_pos, int width, int height)
{
region_id = id;
start_x_pos = x_pos;
start_y_pos = y_pos;
region_width = width;
region_height = height;
hand_mask_generated = false;
hand_present = false;
}
void HandDetectRegion::DetectHand(cv::Mat& camera_frame)
{
Mat input_frame = GenerateHandMaskSquare(camera_frame);
frame_out = input_frame.clone();
// detect skin color
skin_detector.drawSkinColorSampler(camera_frame,start_x_pos,start_y_pos,region_width,region_height);
// remove background from image
foreground = background_remover.getForeground(input_frame);
// detect the hand contours
handMask = skin_detector.getSkinMask(foreground);
// draw the hand rectangle on the camera input, and draw text showing if the hand is open or closed.
DrawHandMask(&camera_frame);
//imshow("output" + region_id, frame_out);
//imshow("foreground" + region_id, foreground);
//imshow("handMask" + region_id, handMask);
/*imshow("handDetection", fingerCountDebug);*/
hand_present = hand_calibrator.CheckIfHandPresent(handMask,handcalibration::HandDetectionType::GAME);
//std::string text = (hand_present ? "hand" : "no");
//cv::putText(camera_frame, text, cv::Point(start_x_pos, start_y_pos), cv::FONT_HERSHEY_COMPLEX, 2.0, cv::Scalar(0, 255, 255), 2);
hand_calibrator.SetHandPresent(hand_present);
//draw black rectangle behind calibration information text
cv::rectangle(camera_frame, cv::Rect(0, camera_frame.rows - 55, 450, camera_frame.cols), cv::Scalar(0, 0, 0), -1);
hand_calibrator.DrawBackgroundSkinCalibrated(camera_frame);
}
cv::Mat HandDetectRegion::GenerateHandMaskSquare(cv::Mat img)
{
cv::Mat mask = cv::Mat::zeros(img.size(), img.type());
cv::Mat distance_img = cv::Mat::zeros(img.size(), img.type());
cv::rectangle(mask, cv::Rect(start_x_pos, start_y_pos, region_width, region_height), cv::Scalar(255, 255, 255), -1);
img.copyTo(distance_img, mask);
hand_mask_generated = true;
return distance_img;
}
bool HandDetectRegion::DrawHandMask(cv::Mat* input)
{
if (!hand_mask_generated) return false;
rectangle(*input, Rect(start_x_pos, start_y_pos, region_width, region_height), (hand_present ? Scalar(0, 255, 0) : Scalar(0,0,255)),2);
return true;
}
bool HandDetectRegion::IsHandPresent()
{
return hand_present;
}
void HandDetectRegion::CalibrateBackground()
{
std::cout << "calibrating background " << region_id << std::endl;
background_remover.calibrate(frame_out);
hand_calibrator.SetBackGroundCalibrated(true);
}
void HandDetectRegion::CalibrateSkin()
{
skin_detector.calibrate(frame_out);
hand_calibrator.SetSkinCalibration(true);
}
std::vector<int> HandDetectRegion::CalculateSkinTresholds()
{
std::cout << "calibrating skin " << region_id << std::endl;
hand_calibrator.SetSkinCalibration(true);
return skin_detector.calibrateAndReturn(frame_out);
}
void HandDetectRegion::setSkinTresholds(std::vector<int>& tresholds)
{
std::cout << "setting skin " << region_id << std::endl;
skin_detector.setTresholds(tresholds);
hand_calibrator.SetSkinCalibration(true);
}
}

HandDetectRegion.h (new file)

@@ -0,0 +1,56 @@
#pragma once
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include "async/StaticCameraInstance.h"
#include "calibration/HandCalibrator.h"
#include "BackgroundRemover.h"
#include "SkinDetector.h"
#include "FingerCount.h"
namespace computervision
{
class HandDetectRegion
{
public:
HandDetectRegion(std::string id,int x_pos, int y_pos, int width, int height);
void SetXPos(int x) { start_x_pos = x; }
void SetYPos(int y) { start_y_pos = y; }
int GetXPos() { return start_x_pos; }
int GetYPos() { return start_y_pos; }
void SetWidth(int width) { region_width = width; }
void SetHeigth(int height) { region_height = height; }
int GetWidth() { return region_width; }
int GetHeight() { return region_height; }
cv::Mat GenerateHandMaskSquare(cv::Mat img);
void DetectHand(cv::Mat& camera_frame);
bool IsHandPresent();
void CalibrateBackground();
void CalibrateSkin();
std::vector<int> CalculateSkinTresholds();
void setSkinTresholds(std::vector<int>& tresholds);
private:
int start_x_pos;
int start_y_pos;
int region_height;
int region_width;
bool hand_mask_generated;
bool hand_present;
cv::Mat frame, frame_out, handMask, foreground, fingerCountDebug;
BackgroundRemover background_remover;
SkinDetector skin_detector;
handcalibration::HandCalibrator hand_calibrator;
std::string region_id;
bool DrawHandMask(cv::Mat* input);
};
}

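A hypothetical wiring of three detection regions, echoing the "[ADD] up left and right detection regions" commit; the ids, positions and sizes are made up for illustration.

// Sketch only: three regions watched simultaneously, with shared calibration keys.
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include "HandDetectRegion.h"   // assumed include path

int main()
{
    cv::VideoCapture cap = static_camera::getCap();
    computervision::HandDetectRegion up("up", 270, 20, 100, 100);
    computervision::HandDetectRegion left("left", 20, 190, 100, 100);
    computervision::HandDetectRegion right("right", 520, 190, 100, 100);

    cv::Mat frame;
    while (cap.read(frame))
    {
        up.DetectHand(frame);
        left.DetectHand(frame);
        right.DetectHand(frame);

        int key = cv::waitKey(1);
        if (key == 'b') { up.CalibrateBackground(); left.CalibrateBackground(); right.CalibrateBackground(); }
        if (key == 's') { up.CalibrateSkin(); left.CalibrateSkin(); right.CalibrateSkin(); }

        // e.g. steer the player towards whichever region reports a hand
        cv::imshow("regions", frame);
        if (key == 27) break;
    }
    return 0;
}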
ObjectDetection.cpp (modified)

@@ -1,33 +1,147 @@
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/video.hpp>
#include "ObjectDetection.h"
#include "BackgroundRemover.h"
#include "SkinDetector.h"
#include "FingerCount.h"
#include "async/StaticCameraInstance.h"
#include "calibration/HandCalibrator.h"
namespace computervision
{
cv::VideoCapture cap(0);
cv::Mat img, imgGray, img2, img2Gray, img3, img4;
cv::Mat img, img_gray, img2, img2_gray, img3, img4;
int hand_mask_start_x_pos, hand_mask_start_y_pos, hand_mask_width, hand_mask_height;
bool hand_mask_generated = false;
Mat frame, frame_out, handMask, foreground, fingerCountDebug;
BackgroundRemover background_remover;
SkinDetector skin_detector;
FingerCount finger_count;
handcalibration::HandCalibrator hand_calibrator;
cv::VideoCapture cap = static_camera::getCap();
ObjectDetection::ObjectDetection()
{
}
void ObjectDetection::readWebcam()
{
cv::Mat ObjectDetection::ReadCamera() {
cap.read(img);
return img;
}
void ObjectDetection::calculateDifference()
cv::VideoCapture ObjectDetection::GetCap()
{
return cap;
}
bool ObjectDetection::DetectHand(Mat camera_frame, bool& hand_present)
{
Mat input_frame = GenerateHandMaskSquare(camera_frame);
frame_out = input_frame.clone();
// detect skin color
skin_detector.drawSkinColorSampler(camera_frame);
// remove background from image
foreground = background_remover.getForeground(input_frame);
// detect the hand contours
handMask = skin_detector.getSkinMask(foreground);
// count the amount of fingers and put the info on the matrix
fingerCountDebug = finger_count.findFingersCount(handMask, frame_out);
// get the amount of fingers
int fingers_amount = finger_count.getAmountOfFingers();
// draw the hand rectangle on the camera input, and draw text showing if the hand is open or closed.
DrawHandMask(&camera_frame);
hand_calibrator.SetAmountOfFingers(fingers_amount);
finger_count.DrawHandContours(camera_frame);
hand_calibrator.DrawHandCalibrationText(camera_frame);
imshow("camera", camera_frame);
/*imshow("output", frame_out);
imshow("foreground", foreground);
imshow("handMask", handMask);
imshow("handDetection", fingerCountDebug);*/
hand_present = hand_calibrator.CheckIfHandPresent(handMask,handcalibration::HandDetectionType::MENU);
hand_calibrator.SetHandPresent(hand_present);
int key = waitKey(1);
if (key == 98) // b, calibrate the background
{
background_remover.calibrate(input_frame);
hand_calibrator.SetBackGroundCalibrated(true);
}
else if (key == 115) // s, calibrate the skin color
{
skin_detector.calibrate(input_frame);
hand_calibrator.SetSkinCalibration(true);
}
return fingers_amount > 0;
}
void ObjectDetection::CalculateDifference()
{
cap.read(img);
cap.read(img2);
cv::cvtColor(img, imgGray, cv::COLOR_RGBA2GRAY);
cv::cvtColor(img2, img2Gray, cv::COLOR_RGBA2GRAY);
cv::cvtColor(img, img_gray, cv::COLOR_RGBA2GRAY);
cv::cvtColor(img2, img2_gray, cv::COLOR_RGBA2GRAY);
cv::absdiff(imgGray, img2Gray, img3);
cv::absdiff(img_gray, img2_gray, img3);
cv::threshold(img3, img4, 50, 170, cv::THRESH_BINARY);
imshow("threshold", img4);
}
void ObjectDetection::showWebcam()
cv::Mat ObjectDetection::GenerateHandMaskSquare(cv::Mat img)
{
hand_mask_start_x_pos = 20;
hand_mask_start_y_pos = img.rows / 5;
hand_mask_width = img.cols / 3;
hand_mask_height = img.cols / 3;
cv::Mat mask = cv::Mat::zeros(img.size(), img.type());
cv::Mat distance_img = cv::Mat::zeros(img.size(), img.type());
cv::rectangle(mask, Rect(hand_mask_start_x_pos, hand_mask_start_y_pos, hand_mask_width, hand_mask_height), Scalar(255, 255, 255), -1);
img.copyTo(distance_img, mask);
hand_mask_generated = true;
return distance_img;
}
bool ObjectDetection::DrawHandMask(cv::Mat* input)
{
if (!hand_mask_generated) return false;
rectangle(*input, Rect(hand_mask_start_x_pos, hand_mask_start_y_pos, hand_mask_width, hand_mask_height), Scalar(255, 255, 255));
return true;
}
void ObjectDetection::ShowWebcam()
{
imshow("Webcam image", img);
}

ObjectDetection.h (modified)

@@ -9,6 +9,7 @@
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
namespace computervision
{
class ObjectDetection
@@ -16,10 +17,75 @@ namespace computervision
private:
public:
/**
* @brief default constructor of ObjectDetection
*
*/
ObjectDetection();
void readWebcam();
void showWebcam();
void calculateDifference();
/**
* @brief Displays an image of the current webcam-footage
*
*/
void ShowWebcam();
/**
* @brief Calculates the difference between two images
* and outputs an image that only shows the difference
*
*/
void CalculateDifference();
/**
* @brief generates the square that will hold the mask in which the hand will be detected.
*
* @param img the current camera frame
* @return a matrix containing the mask
*/
cv::Mat GenerateHandMaskSquare(cv::Mat img);
/**
* @brief reads the camera and returns it in a matrix.
*
* @return the camera frame in a matrix
*/
cv::Mat ReadCamera();
/**
* @brief detects a hand based on the given hand mask input frame.
*
* @param camera_frame the input frame from the camera
* @param hand_present boolean that will hold true if the hand is detected, false if not.
* @return true if hand is open, false if hand is closed
*/
bool DetectHand(cv::Mat camera_frame, bool& hand_present);
/**
* @brief draws the hand mask rectangle on the given input matrix.
*
* @param input the input matrix to draw the rectangle on
*/
bool DrawHandMask(cv::Mat *input);
/**
* @brief checks if the hand of the user is open.
*
* @return true if the hand is open, false if not.
*/
bool IsHandOpen();
/**
* @brief checks whether the hand is held within the detection square.
*
* @return true if the hand is in the detection square, false if not.
*/
bool IsHandPresent();
cv::VideoCapture GetCap();
private:
bool is_hand_open;
bool is_hand_present;
};

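A sketch of the menu-style loop implied by DetectHand, which shows the camera window and handles the 'b' and 's' calibration keys internally; the surrounding loop is an assumption.

// Illustrative menu loop using ObjectDetection.
#include "ObjectDetection.h"   // assumed include path

int main()
{
    computervision::ObjectDetection object_detection;

    while (true)
    {
        cv::Mat frame = object_detection.ReadCamera();

        bool hand_present = false;
        bool hand_open = object_detection.DetectHand(frame, hand_present);

        if (hand_present && hand_open)
        {
            // e.g. confirm the currently highlighted menu item
        }
    }
    return 0;
}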
OpenPoseVideo.cpp (new file)

@@ -0,0 +1,108 @@
#include "OpenPoseVideo.h"
using namespace std;
using namespace cv;
using namespace cv::dnn;
namespace computervision
{
#define MPI
#ifdef MPI
const int POSE_PAIRS[7][2] =
{
{0,1}, {1,2}, {2,3},
{3,4}, {1,5}, {5,6},
{6,7}
};
string protoFile = "res/pose/mpi/pose_deploy_linevec_faster_4_stages.prototxt";
string weightsFile = "res/pose/mpi/pose_iter_160000.caffemodel";
int nPoints = 8;
#endif
#ifdef COCO
const int POSE_PAIRS[17][2] =
{
{1,2}, {1,5}, {2,3},
{3,4}, {5,6}, {6,7},
{1,8}, {8,9}, {9,10},
{1,11}, {11,12}, {12,13},
{1,0}, {0,14},
{14,16}, {0,15}, {15,17}
};
string protoFile = "pose/coco/pose_deploy_linevec.prototxt";
string weightsFile = "pose/coco/pose_iter_440000.caffemodel";
int nPoints = 18;
#endif
Net net;
void OpenPoseVideo::setup() {
net = readNetFromCaffe(protoFile, weightsFile);
net.setPreferableBackend(DNN_TARGET_CPU);
}
void OpenPoseVideo::movementSkeleton(Mat& inputImage, std::function<void(std::vector<Point>&, cv::Mat& poinst_on_image)> f) {
std::cout << "movement skeleton start" << std::endl;
int inWidth = 368;
int inHeight = 368;
float thresh = 0.01;
Mat frame;
int frameWidth = inputImage.size().width;
int frameHeight = inputImage.size().height;
double t = (double)cv::getTickCount();
std::cout << "reading input image and blob" << std::endl;
frame = inputImage;
Mat inpBlob = blobFromImage(frame, 1.0 / 255, Size(inWidth, inHeight), Scalar(0, 0, 0), false, false);
std::cout << "done reading image and blob" << std::endl;
net.setInput(inpBlob);
std::cout << "done setting input to net" << std::endl;
Mat output = net.forward();
t = ((double)cv::getTickCount() - t) / cv::getTickFrequency(); // convert the start tick count into elapsed seconds
std::cout << "time taken to set input and forward: " << t << std::endl;
int H = output.size[2];
int W = output.size[3];
std::cout << "about to find position of boxy parts" << std::endl;
// find the position of the body parts
vector<Point> points(nPoints);
for (int n = 0; n < nPoints; n++)
{
// Probability map of corresponding body's part.
Mat probMap(H, W, CV_32F, output.ptr(0, n));
Point2f p(-1, -1);
Point maxLoc;
double prob;
minMaxLoc(probMap, 0, &prob, 0, &maxLoc);
if (prob > thresh)
{
p = maxLoc;
p.x *= (float)frameWidth / W;
p.y *= (float)frameHeight / H;
circle(frame, cv::Point((int)p.x, (int)p.y), 8, Scalar(0, 255, 255), -1);
cv::putText(frame, cv::format("%d", n), cv::Point((int)p.x, (int)p.y), cv::FONT_HERSHEY_COMPLEX, 1.1, cv::Scalar(0, 0, 255), 2);
}
points[n] = p;
}
cv::putText(frame, cv::format("time taken = %.2f sec", t), cv::Point(50, 50), cv::FONT_HERSHEY_COMPLEX, .8, cv::Scalar(255, 50, 0), 2);
std::cout << "time taken: " << t << std::endl;
//imshow("Output-Keypoints", frame);
//imshow("Output-Skeleton", frame);
std::cout << "about to call points receiving method" << std::endl;
f(points,frame);
}
}

OpenPoseVideo.h (new file)

@@ -0,0 +1,19 @@
#pragma once
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
using namespace cv;
namespace computervision
{
class OpenPoseVideo{
private:
public:
void movementSkeleton(Mat& inputImage, std::function<void(std::vector<Point>&, cv::Mat& poinst_on_image)> f);
void setup();
};
}

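A hedged example of the callback API: movementSkeleton runs the Caffe pose network on one frame and hands the detected key points to the callable. The capture source and window name are assumptions; the model paths are the ones hard-coded in OpenPoseVideo.cpp.

// Sketch of per-frame pose detection with a lambda callback.
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include "OpenPoseVideo.h"   // assumed include path

int main()
{
    computervision::OpenPoseVideo pose;
    pose.setup();                        // loads res/pose/mpi/... into the DNN

    cv::VideoCapture cap(0);
    cv::Mat frame;
    while (cap.read(frame))
    {
        pose.movementSkeleton(frame, [](std::vector<cv::Point>& points, cv::Mat& annotated)
        {
            // points[i] is (-1, -1) when body part i fell below the confidence threshold
            cv::imshow("skeleton", annotated);
        });
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}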
SkinDetector.cpp (new file)

@@ -0,0 +1,175 @@
#include "SkinDetector.h"
#include <iostream>
/*
Author: Pierfrancesco Soffritti https://github.com/PierfrancescoSoffritti
*/
namespace computervision
{
SkinDetector::SkinDetector(void) {
hLowThreshold = 0;
hHighThreshold = 0;
sLowThreshold = 0;
sHighThreshold = 0;
vLowThreshold = 0;
vHighThreshold = 0;
calibrated = false;
skinColorSamplerRectangle1, skinColorSamplerRectangle2;
}
void SkinDetector::drawSkinColorSampler(Mat input) {
int frameWidth = input.size().width, frameHeight = input.size().height;
int rectangleSize = 25;
Scalar rectangleColor = Scalar(0, 255, 255);
skinColorSamplerRectangle1 = Rect(frameWidth / 5, frameHeight / 2, rectangleSize, rectangleSize);
skinColorSamplerRectangle2 = Rect(frameWidth / 5, frameHeight / 3, rectangleSize, rectangleSize);
rectangle(
input,
skinColorSamplerRectangle1,
rectangleColor
);
rectangle(
input,
skinColorSamplerRectangle2,
rectangleColor
);
}
void SkinDetector::drawSkinColorSampler(Mat input,int x, int y,int width, int height) {
int frameWidth = width, frameHeight = height;
int rectangleSize = 25;
Scalar rectangleColor = Scalar(0, 255, 255);
skinColorSamplerRectangle1 = Rect(frameWidth / 5 + x, frameHeight / 2 + y, rectangleSize, rectangleSize);
skinColorSamplerRectangle2 = Rect(frameWidth / 5 + x, frameHeight / 3 + y, rectangleSize, rectangleSize);
rectangle(
input,
skinColorSamplerRectangle1,
rectangleColor
);
rectangle(
input,
skinColorSamplerRectangle2,
rectangleColor
);
}
void SkinDetector::calibrate(Mat input) {
Mat hsvInput;
cvtColor(input, hsvInput, CV_BGR2HSV);
Mat sample1 = Mat(hsvInput, skinColorSamplerRectangle1);
Mat sample2 = Mat(hsvInput, skinColorSamplerRectangle2);
calculateThresholds(sample1, sample2);
calibrated = true;
}
std::vector<int> SkinDetector::calibrateAndReturn(Mat input)
{
Mat hsvInput;
cvtColor(input, hsvInput, CV_BGR2HSV);
Mat sample1 = Mat(hsvInput, skinColorSamplerRectangle1);
Mat sample2 = Mat(hsvInput, skinColorSamplerRectangle2);
calibrated = true;
return calculateAndReturnTresholds(sample1, sample2);
}
void SkinDetector::calculateThresholds(Mat sample1, Mat sample2) {
int offsetLowThreshold = 80;
int offsetHighThreshold = 30;
Scalar hsvMeansSample1 = mean(sample1);
Scalar hsvMeansSample2 = mean(sample2);
hLowThreshold = min(hsvMeansSample1[0], hsvMeansSample2[0]) - offsetLowThreshold;
hHighThreshold = max(hsvMeansSample1[0], hsvMeansSample2[0]) + offsetHighThreshold;
sLowThreshold = min(hsvMeansSample1[1], hsvMeansSample2[1]) - offsetLowThreshold;
sHighThreshold = max(hsvMeansSample1[1], hsvMeansSample2[1]) + offsetHighThreshold;
// the V channel shouldn't be used. By ignoring it, shadows on the hand wouldn't interfere with segmentation.
// Unfortunately there's a bug somewhere and not using the V channel causes some problem. This shouldn't be too hard to fix.
vLowThreshold = min(hsvMeansSample1[2], hsvMeansSample2[2]) - offsetLowThreshold;
vHighThreshold = max(hsvMeansSample1[2], hsvMeansSample2[2]) + offsetHighThreshold;
//vLowThreshold = 0;
//vHighThreshold = 255;
}
std::vector<int> SkinDetector::calculateAndReturnTresholds(Mat sample1, Mat sample2)
{
calculateThresholds(sample1, sample2);
std::vector<int> res;
res.push_back(hLowThreshold);
res.push_back(hHighThreshold);
res.push_back(sLowThreshold);
res.push_back(sHighThreshold);
res.push_back(vLowThreshold);
res.push_back(vHighThreshold);
return res;
}
void SkinDetector::setTresholds(std::vector<int>& tresholds)
{
if (tresholds.size() != 6)
{
std::cout << "tresholds array not the right size!" << std::endl;
return;
}
hLowThreshold = tresholds[0];
hHighThreshold = tresholds[1];
sLowThreshold = tresholds[2];
sHighThreshold = tresholds[3];
vLowThreshold = tresholds[4];
vHighThreshold = tresholds[5];
calibrated = true;
}
Mat SkinDetector::getSkinMask(Mat input) {
Mat skinMask;
if (!calibrated) {
skinMask = Mat::zeros(input.size(), CV_8UC1);
return skinMask;
}
Mat hsvInput;
cvtColor(input, hsvInput, CV_BGR2HSV);
inRange(
hsvInput,
Scalar(hLowThreshold, sLowThreshold, vLowThreshold),
Scalar(hHighThreshold, sHighThreshold, vHighThreshold),
skinMask);
performOpening(skinMask, MORPH_ELLIPSE, { 3, 3 });
dilate(skinMask, skinMask, Mat(), Point(-1, -1), 3);
return skinMask;
}
void SkinDetector::performOpening(Mat binaryImage, int kernelShape, Point kernelSize) {
Mat structuringElement = getStructuringElement(kernelShape, kernelSize);
morphologyEx(binaryImage, binaryImage, MORPH_OPEN, structuringElement);
}
}

SkinDetector.h (new file)

@@ -0,0 +1,85 @@
#pragma once
#include <opencv2\core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgproc/types_c.h>
/*
Author: Pierfrancesco Soffritti https://github.com/PierfrancescoSoffritti
*/
namespace computervision
{
using namespace cv;
using namespace std;
class SkinDetector {
public:
SkinDetector(void);
/*
* @brief draws the rectangles where the skin color will be sampled.
*
* @param input the input matrix to sample the skin color from
*/
void drawSkinColorSampler(Mat input);
void drawSkinColorSampler(Mat input, int x, int y, int width, int heigth);
/*
* @brief calibrates the skin color detector with the given input frame
*
* @param input the input frame to calibrate from
*/
void calibrate(Mat input);
std::vector<int> calibrateAndReturn(Mat input);
void setTresholds(std::vector<int>& tresholds);
/*
* @brief gets the mask for the hand
*
* @param input the input matrix to get the skin mask from
* @returns the skin mask in a new matrix
*/
Mat getSkinMask(Mat input);
private:
// thresholds for hsv calculation
int hLowThreshold = 0;
int hHighThreshold = 0;
int sLowThreshold = 0;
int sHighThreshold = 0;
int vLowThreshold = 0;
int vHighThreshold = 0;
// whether or not the skin detector has been calibrated yet
bool calibrated = false;
// rectangles that get drawn to show where the skin color will be sampled
Rect skinColorSamplerRectangle1, skinColorSamplerRectangle2;
/*
* @brief calculates the skin thresholds for the given samples
*
* @param sample1 the first sample
* @param sample2 the second sample
*/
void calculateThresholds(Mat sample1, Mat sample2);
std::vector<int> calculateAndReturnTresholds(Mat sample1, Mat sample2);
/**
* @brief performs the morphological opening: it generates the structuring element and applies the morphological transformations required to detect the hand.
* This needs to be done to get the skin mask.
*
* @param binaryImage the matrix to perform the opening on. This needs to be a binary image, so consisting of only 1's and 0's.
* @param structuralElementShape the shape to use for the kernel that is used with generating the structuring element
* @param structuralElementSize the size of the kernel that will be used with generating the structuring element.
*/
void performOpening(Mat binaryImage, int structuralElementShape, Point structuralElementSize);
};
}

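A minimal sketch of the intended SkinDetector flow: draw the sampler rectangles, calibrate once with the hand held over them, then threshold every later frame in HSV space. Frame source and key handling are assumptions.

// Illustrative skin-mask loop.
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include "SkinDetector.h"   // assumed include path

int main()
{
    cv::VideoCapture cap(0);
    computervision::SkinDetector skin_detector;
    cv::Mat frame;

    while (cap.read(frame))
    {
        skin_detector.drawSkinColorSampler(frame);   // draw the two sample rectangles

        int key = cv::waitKey(1);
        if (key == 's')
            skin_detector.calibrate(frame);          // hold your hand over the rectangles first

        cv::imshow("camera", frame);
        cv::imshow("skin mask", skin_detector.getSkinMask(frame));
        if (key == 27) break;
    }
    return 0;
}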
StaticCameraInstance.h (new file)

@@ -0,0 +1,12 @@
#pragma once
#include <opencv2/videoio.hpp>
namespace static_camera
{
static cv::VideoCapture getCap()
{
static cv::VideoCapture cap(0);
return cap;
}
};

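A short usage sketch, assuming (as the rest of this branch does) that the VideoCapture copies returned by value refer to the same underlying capture, so the camera is opened only once.

// Two subsystems asking for the camera get handles to the same device.
#include <opencv2/highgui.hpp>
#include "async/StaticCameraInstance.h"   // assumed include path

int main()
{
    cv::VideoCapture cam_a = static_camera::getCap();
    cv::VideoCapture cam_b = static_camera::getCap();   // same underlying capture

    cv::Mat frame;
    if (cam_a.read(frame))
        cv::imshow("shared camera", frame);
    cv::waitKey(0);
    return 0;
}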
async_arm_detection.cpp (new file)

@@ -0,0 +1,46 @@
#include <iostream>
#include "async_arm_detection.h"
#include "../OpenPoseVideo.h"
#include <thread>
#include "StaticCameraInstance.h"
namespace computervision
{
AsyncArmDetection::AsyncArmDetection()
{
}
void AsyncArmDetection::run_arm_detection(std::function<void(std::vector<Point>, cv::Mat poinst_on_image)> points_ready_func, OpenPoseVideo op)
{
VideoCapture cap = static_camera::getCap();
std::cout << "STARTING THREAD LAMBDA" << std::endl;
/*cv::VideoCapture cap = static_camera::GetCap();*/
if (!cap.isOpened())
{
std::cout << "capture was closed, opening..." << std::endl;
cap.open(0);
}
while (true)
{
Mat img;
cap.read(img);
op.movementSkeleton(img, points_ready_func);
}
}
void AsyncArmDetection::start(std::function<void(std::vector<Point>, cv::Mat poinst_on_image)> points_ready_func, OpenPoseVideo op)
{
std::cout << "starting function" << std::endl;
std::thread async_arm_detect_thread(&AsyncArmDetection::run_arm_detection,this, points_ready_func, op);
async_arm_detect_thread.detach(); // makes sure the thread is detached from the variable.
}
}

async_arm_detection.h (new file)

@@ -0,0 +1,23 @@
#pragma once
#include <vector>
#include <opencv2/core/types.hpp>
#include <opencv2/videoio.hpp>
#include <functional>
#include "../OpenPoseVideo.h"
#include "StaticCameraInstance.h"
namespace computervision
{
class AsyncArmDetection
{
public:
AsyncArmDetection(void);
void start(std::function<void(std::vector<cv::Point>, cv::Mat poinst_on_image)>, computervision::OpenPoseVideo op);
private:
void run_arm_detection(std::function<void(std::vector<Point>, cv::Mat poinst_on_image)> points_ready_func, OpenPoseVideo op);
};
}

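A hedged sketch of starting the detached pose-detection thread; the mutex-guarded buffer for the latest points is an illustrative pattern, not part of the diff.

// The callback runs on the detection thread, so the shared state is locked.
#include <mutex>
#include <vector>
#include "async/async_arm_detection.h"   // assumed include path
#include "OpenPoseVideo.h"               // assumed include path

std::mutex points_mutex;
std::vector<cv::Point> latest_points;

int main()
{
    computervision::OpenPoseVideo pose;
    pose.setup();

    computervision::AsyncArmDetection arm_detection;
    arm_detection.start([](std::vector<cv::Point> points, cv::Mat annotated)
    {
        std::lock_guard<std::mutex> lock(points_mutex);
        latest_points = points;          // the game loop reads this under the same mutex
    }, pose);

    // ... run the render / game loop on this thread while detection runs detached ...
    return 0;
}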
HandCalibrator.cpp (new file)

@@ -0,0 +1,92 @@
#include "HandCalibrator.h"
#include <iostream>
#define MIN_MENU_HAND_SIZE 10000
#define MIN_GAME_HAND_SIZE 3000 // todo change
namespace computervision
{
namespace handcalibration
{
HandCalibrator::HandCalibrator()
{
}
void HandCalibrator::DrawHandCalibrationText(cv::Mat& output_frame)
{
cv::rectangle(output_frame, cv::Rect(0, 0, output_frame.cols, 40), cv::Scalar(0, 0, 0), -1);
cv::putText(output_frame, "Hand calibration", cv::Point(output_frame.cols / 2 - 100, 25), cv::FONT_HERSHEY_PLAIN, 2.0, cv::Scalar(18, 219, 65), 2);
cv::putText(output_frame, "press 'b' to calibrate background,then press 's' to calibrate skin tone", cv::Point(5, 35), cv::FONT_HERSHEY_PLAIN, 1.0, cv::Scalar(18, 219, 65), 1);
cv::rectangle(output_frame, cv::Rect(0, output_frame.rows - 80, 450, output_frame.cols), cv::Scalar(0, 0, 0), -1);
cv::putText(output_frame, "hand in frame:", cv::Point(5, output_frame.rows - 50), cv::FONT_HERSHEY_PLAIN, 2.0, cv::Scalar(255, 255, 0), 1);
cv::rectangle(output_frame, cv::Rect(420, output_frame.rows - 67, 15, 15), hand_present ? cv::Scalar(0, 255, 0) : cv::Scalar(0, 0, 255), -1);
DrawBackgroundSkinCalibrated(output_frame);
if (hand_present)
{
std::string hand_text = fingers_amount > 0 ? "open" : "closed";
cv::putText(output_frame, hand_text, cv::Point(10, 75), cv::FONT_HERSHEY_PLAIN, 2.0, cv::Scalar(255, 0, 255), 3);
}
}
void HandCalibrator::DrawBackgroundSkinCalibrated(cv::Mat& output_frame)
{
cv::putText(output_frame, "background calibrated:", cv::Point(5, output_frame.rows - 30), cv::FONT_HERSHEY_PLAIN, 2.0, cv::Scalar(255, 255, 0), 1);
cv::rectangle(output_frame, cv::Rect(420, output_frame.rows - 47, 15, 15), background_calibrated ? cv::Scalar(0, 255, 0) : cv::Scalar(0, 0, 255), -1);
cv::putText(output_frame, "skin color calibrated:", cv::Point(5, output_frame.rows - 10), cv::FONT_HERSHEY_PLAIN, 2.0, cv::Scalar(255, 255, 0), 1);
cv::rectangle(output_frame, cv::Rect(420, output_frame.rows - 27, 15, 15), skintone_calibrated ? cv::Scalar(0, 255, 0) : cv::Scalar(0, 0, 255), -1);
}
void HandCalibrator::SetSkinCalibration(bool val)
{
skintone_calibrated = val;
}
void HandCalibrator::SetBackGroundCalibrated(bool val)
{
background_calibrated = val;
}
void HandCalibrator::SetHandPresent(bool val)
{
hand_present = val;
}
void HandCalibrator::SetAmountOfFingers(int amount)
{
fingers_amount = amount;
}
bool HandCalibrator::CheckIfHandPresent(cv::Mat input_image, HandDetectionType type)
{
std::vector<std::vector<cv::Point>> points;
cv::findContours(input_image, points, cv::RetrievalModes::RETR_LIST, cv::ContourApproximationModes::CHAIN_APPROX_SIMPLE);
if (points.size() == 0) return false;
for (int p = 0; p < points.size(); p++)
{
int area = cv::contourArea(points[p]);
if (type == handcalibration::HandDetectionType::MENU)
if (area > MIN_MENU_HAND_SIZE) return true;
if (type == handcalibration::HandDetectionType::GAME)
if (area > MIN_GAME_HAND_SIZE) return true;
}
return false;
}
}
}

HandCalibrator.h (new file)

@@ -0,0 +1,76 @@
#pragma once
#include <opencv2/core/base.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
namespace computervision
{
namespace handcalibration
{
enum class HandDetectionType
{
MENU,
GAME
};
class HandCalibrator
{
public:
HandCalibrator();
/**
* @brief draws the text to show the status of the calibration on the image
*
* @param output_frame the frame to draw on.
*/
void DrawHandCalibrationText(cv::Mat& output_frame);
/**
* @brief sets the skin calibration variable.
*
* @param val the value to set
*/
void SetSkinCalibration(bool val);
/**
* @brief sets the background calibration variable.
*
* @param val the value to set
*/
void SetBackGroundCalibrated(bool val);
/**
* @brief sets the value for if the hand is present.
*
* @param val the value to set.
*/
void SetHandPresent(bool val);
/**
* @brief checks if the hand is present in the given image
*
* @param input_image the input image to check.
*/
bool CheckIfHandPresent(cv::Mat input_image, HandDetectionType type);
/**
* @brief sets the amount of fingers that are currently detected.
*
* @param amount the amount of fingers.
*/
void SetAmountOfFingers(int amount);
void DrawBackgroundSkinCalibrated(cv::Mat& output_frame);
private:
bool background_calibrated = false;
bool skintone_calibrated = false;
bool hand_present = false;
int fingers_amount = 0;
};
}
}

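An illustrative use of HandCalibrator as the bookkeeping and overlay layer around the detectors; the hand mask and finger count are assumed to come from SkinDetector and FingerCount elsewhere in this branch.

// Hypothetical helper updating the calibration overlay each frame.
#include "calibration/HandCalibrator.h"   // assumed include path

void UpdateCalibrationOverlay(cv::Mat& camera_frame, const cv::Mat& hand_mask,
                              computervision::handcalibration::HandCalibrator& calibrator,
                              int fingers)
{
    using computervision::handcalibration::HandDetectionType;

    bool present = calibrator.CheckIfHandPresent(hand_mask, HandDetectionType::MENU);
    calibrator.SetHandPresent(present);
    calibrator.SetAmountOfFingers(fingers);

    // draws the "hand in frame" / "background calibrated" / "skin color calibrated"
    // status boxes plus the open/closed text onto the camera image
    calibrator.DrawHandCalibrationText(camera_frame);
}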
camera.cpp (modified)

@@ -9,24 +9,42 @@ namespace entities
void Camera::Move(GLFWwindow* window)
{
float movement_speed = 0;
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
{
position.z -= SPEED;
movement_speed -= SPEED;
}
if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
{
position.z += SPEED;
movement_speed += SPEED;
}
if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
{
position.x += SPEED;
rotation.y += ROT_SPEED;
}
if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
{
position.x -= SPEED;
rotation.y -= ROT_SPEED;
}
if (glfwGetKey(window, GLFW_KEY_SPACE) == GLFW_PRESS)
{
rotation.x -= ROT_SPEED;
}
if (glfwGetKey(window, GLFW_KEY_LEFT_SHIFT) == GLFW_PRESS)
{
rotation.x += ROT_SPEED;
}
float dx = glm::cos(glm::radians(rotation.y + 90)) * movement_speed;
float dz = glm::sin(glm::radians(rotation.y + 90)) * movement_speed;
position.x += dx;
position.z += dz;
}
}

camera.h (modified)

@@ -1,14 +1,20 @@
#pragma once
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/gtc/matrix_transform.hpp>
namespace entities
{
/*
* This class represents the viewport of the game. The whole game is seen through this class
*/
class Camera
{
private:
const float SPEED = 0.02f;
// The movement speed of the camera
const float SPEED = 0.52f;
const float ROT_SPEED = 1.0f;
glm::vec3 position;
glm::vec3 rotation;
@@ -16,6 +22,11 @@ namespace entities
public:
Camera(const ::glm::vec3& position, const ::glm::vec3& rotation);
/*
* @brief: This function moves the camera's position based on keyboard input
*
* @param window: The OpenGL window
*/
void Move(GLFWwindow* window);
inline glm::vec3 GetPosition() const{ return position; }

entity.h (modified)

@@ -5,10 +5,15 @@
namespace entities
{
/*
* This class represents a movable model in the game
*/
class Entity
{
private:
protected:
models::TexturedModel model;
glm::vec3 position;
glm::vec3 rotation;
float scale;
@@ -16,7 +21,18 @@ namespace entities
public:
Entity(const models::TexturedModel& model, const glm::vec3& position, const glm::vec3& rotation, float scale);
/*
* @brief: This function increases the position of the entity
*
* @param distance: The amount of distance in each axis the entity needs to move
*/
void IncreasePosition(const glm::vec3& distance);
/*
* @brief: This function increases the rotation of the entity
*
* @param rotation: The angle of each axis the entity needs to rotate
*/
void IncreaseRotation(const glm::vec3& rotation);
inline models::TexturedModel GetModel() const{return model;}

collision_entity.cpp (new file)

@@ -0,0 +1,44 @@
#include "collision_entity.h"
namespace entities
{
CollisionEntity::CollisionEntity(const models::TexturedModel& model, const glm::vec3& position,
const glm::vec3& rotation, float scale, const collision::Box& bounding_box)
: Entity(model, position, rotation, scale),
bounding_box(bounding_box)
{
MoveCollisionBox();
}
void CollisionEntity::OnCollide(const collision::Collision& collision)
{
if (on_collide != nullptr)
{
on_collide(collision);
}
}
bool CollisionEntity::IsColliding(const glm::vec3& point) const
{
return (point.x >= min_xyz.x && point.x <= max_xyz.x) &&
(point.y >= min_xyz.y && point.y <= max_xyz.y) &&
(point.z >= min_xyz.z && point.z <= max_xyz.z);
}
bool CollisionEntity::IsColliding(const CollisionEntity& e) const
{
return (min_xyz.x <= e.max_xyz.x && max_xyz.x >= e.min_xyz.x) &&
(min_xyz.y <= e.max_xyz.y && max_xyz.y >= e.min_xyz.y) &&
(min_xyz.z <= e.max_xyz.z && max_xyz.z >= e.min_xyz.z);
}
void CollisionEntity::MoveCollisionBox()
{
bounding_box.center_pos = position;
const glm::vec3 size = bounding_box.size;
min_xyz = bounding_box.center_pos;
max_xyz = glm::vec3(min_xyz.x + size.x, min_xyz.y + size.y, min_xyz.z + size.z);
}
}

collision_entity.h (new file)

@@ -0,0 +1,65 @@
#pragma once
#include "entity.h"
#include "../collision/collision.h"
namespace entities
{
/*
* This class is an entity with a collision box
*/
class CollisionEntity : public Entity
{
public:
collision::Box bounding_box;
glm::vec3 min_xyz;
glm::vec3 max_xyz;
void (*on_collide)(const collision::Collision& collision) = nullptr; // null until SetCollisionBehaviour is called
public:
CollisionEntity(const models::TexturedModel& model, const glm::vec3& position, const glm::vec3& rotation,
float scale, const collision::Box& bounding_box);
/*
* @brief: A function to do some sort of behaviour when the entity collides
*
* @param collision: The collision
*/
virtual void OnCollide(const collision::Collision& collision);
/*
* @brief: A function to check if the entity is colliding with a point in 3D space
*
* @param point: The point to check if it is colliding with the entity
*
* @return: True if the entity is colliding, false if not
*/
bool IsColliding(const glm::vec3& point) const;
/*
* @brief: A function to check if the entity is colliding with another entity
*
* @param e: The other entity to check if it is colliding with this entity
*
* @return: True if the entities are colliding, false if not
*/
bool IsColliding(const CollisionEntity& e) const;
/*
* @brief: A function to set the collision behaviour of the entity
*
* @param function: A function pointer to a function with the collision behaviour
*/
void SetCollisionBehaviour(void (*function)(const collision::Collision& collision))
{ if (function != nullptr) { on_collide = function; } }
protected:
/*
* @brief: This method moves the collision to the center of the entity
*/
void MoveCollisionBox();
};
}

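A hypothetical sketch of attaching a collision callback to an entity; the model, transform values and helper names are placeholders.

// Sketch only: build a player entity with a 1 x 2 x 1 collision box.
#include <iostream>
#include "entities/collision_entity.h"   // assumed include path

void OnPlayerCollide(const collision::Collision& collision)
{
    std::cout << "player hit something" << std::endl;
}

entities::CollisionEntity MakePlayer(const models::TexturedModel& model)
{
    collision::Box box{ glm::vec3(0.0f), glm::vec3(1.0f, 2.0f, 1.0f) };
    entities::CollisionEntity player(model, glm::vec3(0.0f), glm::vec3(0.0f), 1.0f, box);
    player.SetCollisionBehaviour(OnPlayerCollide);   // plain function pointer, per the header
    return player;
}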
src/entities/light.h (new file, 30 lines)

@@ -0,0 +1,30 @@
#pragma once
#include <glm/vec3.hpp>
namespace entities
{
/*
* This class represents a light in the game
*/
class Light
{
private:
glm::vec3 position;
glm::vec3 color;
glm::vec3 attenuation = { 1, 0, 0 };
public:
Light(const glm::vec3& position, const glm::vec3& color) : position(position), color(color) { }
Light(const glm::vec3& position, const glm::vec3& color, const glm::vec3& attenuation)
: position(position), color(color), attenuation(attenuation) { }
glm::vec3 GetPosition() const { return position; }
void setPosition(const glm::vec3& position) { this->position = position; }
glm::vec3 GetColor() const { return color; }
void setColor(const glm::vec3& color) { this->color = color; }
glm::vec3 GetAttenuation() const { return attenuation; }
void SetAttenuation(const glm::vec3& attenuation) { this->attenuation = attenuation; }
};
}

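An illustrative constructor call, assuming the attenuation vector holds constant, linear and quadratic terms (the {1, 0, 0} default suggests this reading).

// Sketch only: a white point light that fades with distance.
#include "entities/light.h"   // assumed include path

entities::Light MakeLamp()
{
    return entities::Light(glm::vec3(10.0f, 5.0f, 10.0f),    // position
                           glm::vec3(1.0f, 1.0f, 1.0f),      // colour
                           glm::vec3(1.0f, 0.01f, 0.002f));  // assumed constant/linear/quadratic attenuation
}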
src/gui/gui_element.h (new file, 26 lines)

@@ -0,0 +1,26 @@
#pragma once
#include <glm/gtc/matrix_transform.hpp>
#include "../toolbox/toolbox.h"
namespace gui
{
/*
* Structure for representing a gui item to display on the screen
*
* texture = The texture for the gui
* position = The center position of the gui
* scale = The size (scale) of the gui
*/
struct GuiTexture
{
int texture;
glm::vec2 position;
glm::vec2 scale;
GuiTexture(int texture, glm::vec2 position, glm::vec2 scale): texture(texture), position(position), scale(scale)
{
// Correct the x scale for the window's aspect ratio; this->scale is used so the
// stored member (not just the constructor parameter) is adjusted
this->scale.x /= (WINDOW_WIDTH / WINDOW_HEIGT);
}
};
}

View File

@@ -0,0 +1,83 @@
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include "gui_interactable.h"
namespace gui
{
InteractableGui::InteractableGui(int default_texture, glm::vec2 position, glm::vec2 scale)
: GuiTexture(default_texture, position, scale)
{
this->default_texture = default_texture;
minXY = glm::vec2(position.x - scale.x, position.y - scale.y);
maxXY = glm::vec2(position.x + scale.x, position.y + scale.y);
}
void InteractableGui::Update(GLFWwindow* window)
{
if (IsHoveringAbove(window) && glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_LEFT) == GLFW_PRESS)
{
if (clicked_texture != 0)
{
texture = clicked_texture;
}
else
{
texture = default_texture;
}
if (!is_clicking)
{
OnClick();
is_clicking = true;
}
}
else
{
if (is_clicking)
{
is_clicking = false;
}
}
}
bool InteractableGui::IsHoveringAbove(GLFWwindow* window)
{
double x_pos, y_pos;
glfwGetCursorPos(window, &x_pos, &y_pos);
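// Map the cursor position from window pixels to the same [-1, 1] range the GUI uses:
// dividing by SCALED_WIDTH * DEFAULT_WIDTH is equivalent to dividing by WINDOW_WIDTH,
// and y is negated because GLFW reports y growing downwards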
const float x_rel = (x_pos / SCALED_WIDTH / DEFAULT_WIDTH) * 2.0f - 1.0f;
const float y_rel = -((y_pos / SCALED_HEIGHT / DEFAULT_HEIGHT) * 2.0f - 1.0f);
if (x_rel >= minXY.x && x_rel <= maxXY.x &&
y_rel >= minXY.y && y_rel <= maxXY.y)
{
if (hover_texture != 0)
{
texture = hover_texture;
}
else
{
texture = default_texture;
}
if (!is_hovering)
{
OnEnter();
is_hovering = true;
}
return true;
}
texture = default_texture;
if (is_hovering)
{
OnExit();
is_hovering = false;
}
return false;
}
}

112
src/gui/gui_interactable.h Normal file
View File

@@ -0,0 +1,112 @@
#pragma once
#include <glm/gtc/matrix_transform.hpp>
#include "../toolbox/toolbox.h"
#include "gui_element.h"
namespace gui
{
/*
* This class represents a gui item which can be interacted with
*/
class InteractableGui : public GuiTexture
{
private:
int default_texture;
int clicked_texture = 0;
int hover_texture = 0;
bool is_hovering = false;
bool is_clicking = false;
glm::vec2 minXY;
glm::vec2 maxXY;
public:
InteractableGui(int default_texture, glm::vec2 position, glm::vec2 scale);
/*
* @brief: Call this function every frame
*
* @param window: An openGL window
*/
void Update(GLFWwindow* window);
/*
* @brief: This function gets called when the InteractableGui is clicked
*/
virtual void OnClick() = 0;
/*
* @brief: This function gets called when the mouse starts hovering above the InteractableGUI
*/
virtual void OnEnter() = 0;
/*
* @brief: This function gets called when the mouse stops hovering above the InteractableGUI
*/
virtual void OnExit() = 0;
/*
* @brief: This function sets the texture of the InteractableGUI for when the InteractableGUI is clicked
*/
void SetClickedTexture(int texture) { clicked_texture = texture; }
/*
* @brief: This function sets the texture of the InteractableGUI for when the mouse is hovering above the InteractableGUI
*/
void SetHoverTexture(int texture) { hover_texture = texture; }
private:
/*
* @brief: This function checks if the mouse is hovering above the InteractableGUI
*
* @param window: An openGL window
*
* @return: True or false
*/
bool IsHoveringAbove(GLFWwindow* window);
};
/*
* This class represents a button
*/
class Button : public InteractableGui
{
private:
// Default to nullptr so unset actions are safely skipped in OnClick/OnEnter/OnExit
void (*on_click_action)() = nullptr;
void (*on_enter_action)() = nullptr;
void (*on_exit_action)() = nullptr;
public:
Button(int default_texture, glm::vec2 position, glm::vec2 scale) : InteractableGui(default_texture, position, scale) {}
/*
* @brief: This function sets an action (function pointer) to the OnClick function
*
* @param fun: A function pointer to a function (or lambda)
*/
void SetOnClickAction(void (*fun)()) { on_click_action = fun; }
/*
* @brief: This function sets an action (function pointer) to the OnEnter function
*
* @param fun: A function pointer to a function (or lambda)
*/
void SetOnEnterAction(void (*fun)()) { on_enter_action = fun; }
/*
* @brief: This function sets an action (function pointer) to the OnExit function
*
* @param fun: A function pointer to a function (or lambda)
*/
void SetOnExitAction(void (*fun)()) { on_exit_action = fun; }
protected:
void OnClick() override { if (on_click_action != nullptr) on_click_action(); }
void OnEnter() override { if (on_enter_action != nullptr) on_enter_action(); }
void OnExit() override { if (on_exit_action != nullptr) on_exit_action(); }
};
}
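A short sketch of how a Button might be wired up, assuming a valid GLFWwindow* and the texture paths used elsewhere in this change; the lambdas convert to plain function pointers because they capture nothing:
#include <iostream>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include "gui/gui_interactable.h"
#include "renderEngine/loader.h"
void ButtonExample(GLFWwindow* window)
{
gui::Button button(render_engine::loader::LoadTexture("res/Mayo.png"),
glm::vec2(0.5f, 0.0f), glm::vec2(0.25f, 0.25f));
button.SetHoverTexture(render_engine::loader::LoadTexture("res/Texture.png"));
button.SetOnClickAction([]() { std::cout << "clicked" << std::endl; });
button.SetOnEnterAction([]() { std::cout << "hover started" << std::endl; });
button.SetOnExitAction([]() { std::cout << "hover ended" << std::endl; });
// Call once per frame so hover/click state and the displayed texture stay up to date
button.Update(window);
}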

View File

@@ -1,18 +1,35 @@
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/gtc/matrix_transform.hpp>
#include <functional>
#include <vector>
#define STB_IMAGE_IMPLEMENTATION
#include <iostream>
#include <map>
#include "stb_image.h"
#include <ostream>
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/video.hpp>
#include "gui/gui_interactable.h"
#include "models/model.h"
#include "renderEngine/loader.h"
#include "renderEngine/obj_loader.h"
#include "renderEngine/renderer.h"
#include "shaders/static_shader.h"
#include "shaders/entity_shader.h"
#include "toolbox/toolbox.h"
#include "scenes/scene.h"
#include "scenes/in_Game_Scene.h"
#include "scenes/startup_Scene.h"
#include "computervision/ObjectDetection.h"
//#include "computervision/OpenPoseImage.h"
#include "computervision/OpenPoseVideo.h"
#include "computervision/async/async_arm_detection.h"
#pragma comment(lib, "glfw3.lib")
#pragma comment(lib, "glew32s.lib")
@@ -20,84 +37,90 @@
static double UpdateDelta();
static GLFWwindow* window;
scene::Scene* current_scene;
bool points_img_available = false;
cv::Mat points_img;
void retrieve_points(std::vector<Point> arm_points, cv::Mat points_on_image)
{
std::cout << "got points!!" << std::endl;
std::cout << "points: " << arm_points << std::endl;
points_img = points_on_image;
points_img_available = true;
}
int main(void)
{
#pragma region OPENGL_SETTINGS
if (!glfwInit())
throw "Could not inditialize glwf";
window = glfwCreateWindow(WINDOW_WIDTH, WINDOW_HEIGT, "SDBA", NULL, NULL);
if (!window)
{
glfwTerminate();
throw "Could not initialize glwf";
}
glfwMakeContextCurrent(window);
glewInit();
glGetError();
#pragma endregion
current_scene = new scene::Startup_Scene();
glfwSetKeyCallback(window, [](GLFWwindow* window, int key, int scancode, int action, int mods)
{
if (key == GLFW_KEY_ESCAPE)
glfwSetWindowShouldClose(window, true);
});
models::RawModel raw_model = LoadObjModel("res/Tree.obj");
models::ModelTexture texture = { render_engine::loader::LoadTexture("res/TreeTexture.png") };
models::TexturedModel model = { raw_model, texture };
entities::Entity entity(model, glm::vec3(0, -5, -20), glm::vec3(0, 0, 0), 1);
shaders::StaticShader shader;
shader.Init();
render_engine::renderer::Init(shader);
entities::Camera camera(glm::vec3(0, 0, 0), glm::vec3(0, 0, 0));
computervision::ObjectDetection objDetect;
{
if (key == GLFW_KEY_ESCAPE)
{
glfwSetWindowShouldClose(window, true);
}
current_scene->onKey(window, key, scancode, action, mods);
});
bool window_open = true;
// Main game loop
while (!glfwWindowShouldClose(window))
while (!glfwWindowShouldClose(window) && window_open)
{
// Update
const double delta = UpdateDelta();
entity.IncreaseRotation(glm::vec3(0, 1, 0));
camera.Move(window);
// Render
render_engine::renderer::Prepare();
shader.Start();
shader.LoadViewMatrix(camera);
scene::Scenes return_value = current_scene->start(window);
delete current_scene;
render_engine::renderer::Render(entity, shader);
switch (return_value) {
case scene::Scenes::STOP:
window_open = false;
break;
objDetect.calculateDifference();
case scene::Scenes::STARTUP:
current_scene = new scene::Startup_Scene();
break;
// Finish up
shader.Stop();
glfwSwapBuffers(window);
glfwPollEvents();
case scene::Scenes::INGAME:
current_scene = new scene::In_Game_Scene();
break;
default:
std::cout << "Wrong return value!!! ->" << std::endl;
break;
}
}
// Clean up
shader.CleanUp();
render_engine::loader::CleanUp();
// Clean up -> preventing memory leaks!!!
std::cout << "ending..." << std::endl;
glfwTerminate();
return 0;
}
static double UpdateDelta()
{
double current_time = glfwGetTime();
static double last_frame_time = current_time;
double delt_time = current_time - last_frame_time;
last_frame_time = current_time;
return delt_time;
}

View File

@@ -5,24 +5,31 @@
namespace models
{
/*
Structure for storing a vboID and vertex_count.
Structure for storing a vboID and vertex_count (this represents a mesh without a texture).
This structure represents a Bare bones Model (A mesh without a texture).
The vao_id, points to an ID stored by openGL and the
vertex_count is how many triangles in the mesh there are.
vao_id = The openGL id of the model
vertex_count = The number of vertices in the model
model_size = The size on each axis of the model
*/
struct RawModel
{
GLuint vao_id;
int vertex_count;
glm::vec3 model_size = { -1, -1, -1 };
};
/*
Structure for storing a texture (texture_id) to apply to a RawModel.
texture_id = The openGL id of the textures
shine_damper = A damper for the angle at which the model needs to be looked at to see reflections
reflectivity = The amount of light the model reflects
*/
struct ModelTexture
{
GLuint texture_id;
float shine_damper = 1;
float reflectivity = 0;
};
/*

View File

@@ -1,7 +1,10 @@
#include <GL/glew.h>
#include <glm/vec3.hpp>
#include "../stb_image.h"
#include "loader.h"
#include <iostream>
namespace render_engine
{
namespace loader
@@ -9,22 +12,38 @@ namespace render_engine
static GLuint CreateVao();
static void StoreDataInAttributeList(int attribute_number, int coordinate_size, std::vector<float>& data);
static void BindIndicesBuffer(std::vector<unsigned int>& indices);
static glm::vec3 GetSizeModel(std::vector<float>& positions);
static std::vector<GLuint> vaos;
static std::vector<GLuint> vbos;
static std::vector<GLuint> textures;
/*
This function will generate a Model from vertex positions, textureCoordinates and indices.
This function will generate a Model from vertex positions, texture coordinates, normals and indices.
*/
struct models::RawModel LoadToVAO(std::vector<float>& positions, std::vector<float>& texture_coords, std::vector<unsigned int>& indices)
models::RawModel LoadToVAO(std::vector<float>& positions, std::vector<float>& texture_coords, std::vector<float>& normals, std::vector<unsigned int>& indices)
{
GLuint vao_id = CreateVao();
const GLuint vao_id = CreateVao();
BindIndicesBuffer(indices);
StoreDataInAttributeList(0, 3, positions);
StoreDataInAttributeList(1, 2, texture_coords);
StoreDataInAttributeList(2, 3, normals);
glBindVertexArray(0);
return { vao_id, static_cast<int>(indices.size()) };
const glm::vec3 model_size = GetSizeModel(positions);
return { vao_id, static_cast<int>(indices.size()), model_size };
}
/*
This function will generate a Model from vertex positions.
*/
models::RawModel LoadToVAO(std::vector<float>& positions)
{
const GLuint vao_id = CreateVao();
StoreDataInAttributeList(0, 2, positions);
glBindVertexArray(0);
return { vao_id, static_cast<int>(positions.size()) / 2 };
}
/*
@@ -40,6 +59,12 @@ namespace render_engine
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imgData);
// Set mipmapping with a constant LOD
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -0.4f);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
@@ -113,5 +138,72 @@ namespace render_engine
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo_id);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(int) * indices.size(), &indices[0], GL_STATIC_DRAW);
}
/**
* @brief gets the width, height and depth of a model
* @param positions all the points of a model
* @returns glm::vec3 the size of the model (width, height and depth)
**/
static glm::vec3 GetSizeModel(std::vector<float>& positions)
{
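// Track the extremes on each axis while walking the positions; the +/-100 start values
// assume every vertex coordinate of the model lies within [-100, 100]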
float minX = 100;
float maxX = -100;
float minY = 100;
float maxY = -100;
float minZ = 100;
float maxZ = -100;
for (int i = 0; i < positions.size(); ++i)
{
const int index = i % 3;
const float value = positions[i];
switch (index)
{
case 0: // x
{
if (value < minX)
{
minX = value;
} else if (value > maxX)
{
maxX = value;
}
break;
}
case 1: // y
{
if (value < minY)
{
minY = value;
}
else if (value > maxY)
{
maxY = value;
}
break;
}
case 2: // z
{
if (value < minZ)
{
minZ = value;
}
else if (value > maxZ)
{
maxZ = value;
}
break;
}
}
}
const float sizeX = maxX - minX;
const float sizeY = maxY - minY;
const float sizeZ = maxZ - minZ;
return { sizeX, sizeY, sizeZ };
}
}
}

View File

@@ -9,17 +9,36 @@ namespace render_engine
namespace loader
{
/*
This function generates a model from model data.
* @brief: This function generates a model from model data.
*
* @param positions: The positions of each vertex (in order: x, y, z) in the model
* @param texture_coords: The texture coordinates of the model
* @param normals: The normal of each vertex in the model
* @param indices: An index list describing which vertices make up each triangle
*
* @return: A new RawModel which represents all the parameters in one struct
*/
struct models::RawModel LoadToVAO(std::vector<float>& positions, std::vector<float>& texture_coords, std::vector<unsigned int>& indices);
models::RawModel LoadToVAO(std::vector<float>& positions, std::vector<float>& texture_coords, std::vector<float>& normals, std::vector<unsigned int>& indices);
/*
Loads a texture from a file into openGL using stb_image.h
* @brief: Overload of the function above that only needs vertex positions (no texture coordinates, normals or indices).
* Use this function, for example, to load GUI items (2D quads) into OpenGL.
*
* @param positions: The positions of each vertex (in order: x, y) in the model
*
* @return: A new RawModel which represents all the parameters in one struct
*/
models::RawModel LoadToVAO(std::vector<float>& positions);
/*
* @brief: Loads a texture from a file into openGL using stb_image.h
*
* @param file_name: The filepath to the texture
*/
GLuint LoadTexture(std::string file_name);
/*
Call this function when cleaning up all the meshes (when exiting the program).
* @brief: Call this function when cleaning up all the meshes (when exiting the program).
*/
void CleanUp();
}
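As an illustration of the positions-only overload, this is roughly how the renderer below builds its shared GUI quad (two floats per vertex, drawn as a triangle strip):
#include <vector>
#include "renderEngine/loader.h"
models::RawModel MakeGuiQuad()
{
// Four 2D vertices forming a full-screen quad when used with GL_TRIANGLE_STRIP
std::vector<float> quad_positions = { -1, 1, -1, -1, 1, 1, 1, -1 };
return render_engine::loader::LoadToVAO(quad_positions);
}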

View File

@@ -1,8 +1,11 @@
#include <GL/glew.h>
#include <glm/gtc/matrix_transform.hpp>
#include "../models/model.h"
#include "renderer.h"
#include "loader.h"
#include "../toolbox/toolbox.h"
#include "renderer.h"
#include <iostream>
namespace render_engine
{
@@ -12,17 +15,27 @@ namespace render_engine
static const float NEAR_PLANE = 0.01f;
static const float FAR_PLANE = 1000.0f;
/*
This function will load the projectionMatrix into the shader
*/
void Init(shaders::StaticShader& shader)
// GUI variables
static models::RawModel quad;
void Init(shaders::EntityShader& shader)
{
// Faces which are not facing the camera are not rendered
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
const glm::mat4 projectionMatrix =
glm::perspective(glm::radians(FOV), (WINDOW_WIDTH / WINDOW_HEIGT), NEAR_PLANE, FAR_PLANE);
// Load the projectionmatrix into the shader
shader.Start();
shader.LoadProjectionMatrix(projectionMatrix);
shader.Stop();
// Initialize the quad for the GUI
std::vector<float> quad_positions = { -1, 1, -1, -1, 1, 1, 1, -1 };
quad = loader::LoadToVAO(quad_positions);
}
/*
@@ -32,36 +45,82 @@ namespace render_engine
{
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0.3f, 0.4f, 0.6f, 1.0f);
glClearColor(SKY_COLOR.r, SKY_COLOR.g, SKY_COLOR.b, 1.0f);
}
/*
This function will Render a Model on the screen.
*/
void Render(entities::Entity& entity, shaders::StaticShader& shader)
void Render(entities::Entity& entity, shaders::EntityShader& shader)
{
const models::TexturedModel model = entity.GetModel();
const models::RawModel rawModel = model.raw_model;
const models::RawModel raw_model = model.raw_model;
const models::ModelTexture texture = model.texture;
// Enable the model
glBindVertexArray(rawModel.vao_id);
// Enable the model (VAO)
glBindVertexArray(raw_model.vao_id);
// Enable the inputs for the vertexShader
// Enable the VBO's from the model (VAO)
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
// Load the transformation of the model into the shader
const glm::mat4 modelMatrix = toolbox::CreateModelMatrix(entity.GetPosition(), entity.GetRotation(), entity.GetScale());
shader.LoadModelMatrix(modelMatrix);
shader.LoadShineVariables(texture.shine_damper, texture.reflectivity);
// Draw the model
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, model.texture.texture_id);
glDrawElements(GL_TRIANGLES, rawModel.vertex_count, GL_UNSIGNED_INT, 0);
glDrawElements(GL_TRIANGLES, raw_model.vertex_count, GL_UNSIGNED_INT, 0);
// Disable the VBO's and model (VAO)
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glBindVertexArray(0);
}
void Render(std::vector<gui::GuiTexture*>& guis, shaders::GuiShader& shader)
{
shader.Start();
// Enable the VAO and the positions VBO
glBindVertexArray(quad.vao_id);
glEnableVertexAttribArray(0);
// Enable alpha blending (for transparency in the texture)
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Disable depth testing so textures with transparency can overlap
glDisable(GL_DEPTH_TEST);
// Render each gui to the screen
for (gui::GuiTexture* gui : guis)
{
// Bind the texture of the gui to the shader
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gui->texture);
glm::mat4 matrix = toolbox::CreateModelMatrix(gui->position, gui->scale);
shader.LoadModelMatrix(matrix);
glDrawArrays(GL_TRIANGLE_STRIP, 0, quad.vertex_count);
}
// Enable depth test again
glEnable(GL_DEPTH_TEST);
// Disable alpha blending
glDisable(GL_BLEND);
// Disable the VBO and VAO
glDisableVertexAttribArray(0);
glBindVertexArray(0);
shader.Stop();
}
}
}

View File

@@ -1,25 +1,43 @@
#pragma once
#include "../gui/gui_element.h"
#include "../entities/entity.h"
#include "../shaders/static_shader.h"
#include "../shaders/entity_shader.h"
#include "../shaders/gui_shader.h"
namespace render_engine
{
namespace renderer
{
/*
Call this function when starting the program
*/
void Init(shaders::StaticShader& shader);
const glm::vec3 SKY_COLOR = { 0.3f, 0.4f, 0.6f };
/*
Call this function before rendering.
@brief: Call this function when starting the program
@param shader: The shader to render the entities with
*/
void Init(shaders::EntityShader& shader);
/*
@brief: Call this function before rendering.
This function will enable culling and load the projectionMatrix into the shader.
*/
void Prepare();
/*
Call this function when wanting to Render a mesh to the screen.
@brief: Call this function when wanting to Render a mesh to the screen.
@param entity: The entity which needs to be rendered
@param shader: The shader the entity needs to be rendered with
*/
void Render(entities::Entity& entity, shaders::StaticShader& shader);
void Render(entities::Entity& entity, shaders::EntityShader& shader);
/*
@brief: Call this function to render gui_textures on the screen
@param guis: A list with all the GUI textures you want to render
@param shader: The shader the GUI textures need to be rendered with
*/
void Render(std::vector<gui::GuiTexture*>& guis, shaders::GuiShader& shader);
}
}
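A condensed sketch of one frame using these functions, assuming the shaders, camera, entities, lights and GUI list are set up as in the scene code elsewhere in this change:
#include <vector>
#include "renderEngine/renderer.h"
#include "entities/light.h"
void RenderFrame(shaders::EntityShader& shader, shaders::GuiShader& gui_shader,
entities::Camera& camera,
std::vector<entities::Entity>& entities,
std::vector<entities::Light>& lights,
std::vector<gui::GuiTexture*>& guis)
{
render_engine::renderer::Prepare();
// Entity pass
shader.Start();
shader.LoadSkyColor(render_engine::renderer::SKY_COLOR);
shader.LoadLights(lights);
shader.LoadViewMatrix(camera);
for (entities::Entity& entity : entities)
{
render_engine::renderer::Render(entity, shader);
}
shader.Stop();
// GUI pass (this overload starts and stops the GUI shader internally)
render_engine::renderer::Render(guis, gui_shader);
}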

View File

@@ -6,124 +6,128 @@
#include "loader.h"
#include "obj_loader.h"
static void Split(const std::string& s, char delim, std::vector<std::string>& elems)
namespace render_engine
{
std::stringstream ss;
ss.str(s);
std::string item;
while (getline(ss, item, delim)) {
elems.push_back(item);
}
}
static std::vector<std::string> Split(const std::string& s, char delim)
{
std::vector<std::string> elems;
Split(s, delim, elems);
return elems;
}
static void ProcessVertex(const std::vector<std::string>& vertex_data,
const std::vector<glm::vec3>& normals,
const std::vector<glm::vec2>& textures,
std::vector<GLuint>& indices,
std::vector<GLfloat>& texture_array,
std::vector<GLfloat>& normal_array)
{
GLuint current_vertex_pointer = std::stoi(vertex_data.at(0)) - 1;
indices.push_back(current_vertex_pointer);
glm::vec2 current_texture = textures.at(std::stoi(vertex_data.at(1)) - 1);
texture_array[(current_vertex_pointer * 2) % texture_array.size()] = current_texture.x;
texture_array[(current_vertex_pointer * 2 + 1) % texture_array.size()] = 1 - current_texture.y;
glm::vec3 current_norm = normals.at(std::stoi(vertex_data.at(2)) - 1);
normal_array[current_vertex_pointer * 3] = current_norm.x;
normal_array[current_vertex_pointer * 3 + 1] = current_norm.y;
normal_array[current_vertex_pointer * 3 + 2] = current_norm.z;
}
models::RawModel LoadObjModel(std::string file_name)
{
std::ifstream inFile (file_name);
if ( !inFile.is_open() )
static void Split(const std::string& s, char delim, std::vector<std::string>& elems)
{
throw std::runtime_error ( "Could not open model file " + file_name + ".obj!" );
std::stringstream ss;
ss.str(s);
std::string item;
while (getline(ss, item, delim)) {
elems.push_back(item);
}
}
std::vector<glm::vec3> vertices;
std::vector<glm::vec3> normals;
std::vector<glm::vec2> textures;
std::vector<GLuint> indices;
std::vector<GLfloat> vertex_array;
std::vector<GLfloat> normal_array;
std::vector<GLfloat> texture_array;
std::string line;
try
static std::vector<std::string> Split(const std::string& s, char delim)
{
while (std::getline(inFile, line))
std::vector<std::string> elems;
Split(s, delim, elems);
return elems;
}
static void ProcessVertex(const std::vector<std::string>& vertex_data,
const std::vector<glm::vec3>& normals,
const std::vector<glm::vec2>& textures,
std::vector<GLuint>& indices,
std::vector<GLfloat>& texture_array,
std::vector<GLfloat>& normal_array)
{
GLuint current_vertex_pointer = std::stoi(vertex_data.at(0)) - 1;
indices.push_back(current_vertex_pointer);
glm::vec2 current_texture = textures.at(std::stoi(vertex_data.at(1)) - 1);
texture_array[(current_vertex_pointer * 2) % texture_array.size()] = current_texture.x;
texture_array[(current_vertex_pointer * 2 + 1) % texture_array.size()] = 1 - current_texture.y;
glm::vec3 current_norm = normals.at(std::stoi(vertex_data.at(2)) - 1);
normal_array[current_vertex_pointer * 3] = current_norm.x;
normal_array[current_vertex_pointer * 3 + 1] = current_norm.y;
normal_array[current_vertex_pointer * 3 + 2] = current_norm.z;
}
models::RawModel LoadObjModel(std::string file_name)
{
std::ifstream inFile(file_name);
if (!inFile.is_open())
{
std::vector<std::string> split_line = Split(line, ' ');
if (split_line.at(0) == "v")
throw std::runtime_error("Could not open model file " + file_name + ".obj!");
}
std::vector<glm::vec3> vertices;
std::vector<glm::vec3> normals;
std::vector<glm::vec2> textures;
std::vector<GLuint> indices;
std::vector<GLfloat> vertex_array;
std::vector<GLfloat> normal_array;
std::vector<GLfloat> texture_array;
std::string line;
try
{
while (std::getline(inFile, line))
{
glm::vec3 vertex;
vertex.x = std::stof(split_line.at(1));
vertex.y = std::stof(split_line.at(2));
vertex.z = std::stof(split_line.at(3));
vertices.push_back(vertex);
std::vector<std::string> split_line = Split(line, ' ');
if (split_line.at(0) == "v")
{
glm::vec3 vertex;
vertex.x = std::stof(split_line.at(1));
vertex.y = std::stof(split_line.at(2));
vertex.z = std::stof(split_line.at(3));
vertices.push_back(vertex);
}
else if (split_line.at(0) == "vt")
{
glm::vec2 texture;
texture.x = std::stof(split_line.at(1));
texture.y = std::stof(split_line.at(2));
textures.push_back(texture);
}
else if (split_line.at(0) == "vn")
{
glm::vec3 normal;
normal.x = std::stof(split_line.at(1));
normal.y = std::stof(split_line.at(2));
normal.z = std::stof(split_line.at(3));
normals.push_back(normal);
}
else if (split_line.at(0) == "f")
{
normal_array = std::vector<GLfloat>(vertices.size() * 3);
texture_array = std::vector<GLfloat>(textures.size() * 2);
break;
}
}
else if (split_line.at(0) == "vt")
while (true)
{
glm::vec2 texture;
texture.x = std::stof(split_line.at(1));
texture.y = std::stof(split_line.at(2));
textures.push_back(texture);
}
else if (split_line.at(0) == "vn")
{
glm::vec3 normal;
normal.x = std::stof(split_line.at(1));
normal.y = std::stof(split_line.at(2));
normal.z = std::stof(split_line.at(3));
normals.push_back(normal);
}
else if (split_line.at(0) == "f")
{
normal_array = std::vector<GLfloat>(vertices.size() * 3);
texture_array = std::vector<GLfloat>(textures.size() * 2);
break;
std::vector<std::string> split = Split(line, ' ');
std::vector<std::string> vertex1 = Split(split.at(1), '/');
std::vector<std::string> vertex2 = Split(split.at(2), '/');
std::vector<std::string> vertex3 = Split(split.at(3), '/');
ProcessVertex(vertex1, normals, textures, indices, texture_array, normal_array);
ProcessVertex(vertex2, normals, textures, indices, texture_array, normal_array);
ProcessVertex(vertex3, normals, textures, indices, texture_array, normal_array);
if (!std::getline(inFile, line))
{
break;
}
}
}
while (true)
catch (const std::exception& e)
{
std::vector<std::string> split = Split(line, ' ');
std::vector<std::string> vertex1 = Split(split.at(1), '/');
std::vector<std::string> vertex2 = Split(split.at(2), '/');
std::vector<std::string> vertex3 = Split(split.at(3), '/');
ProcessVertex(vertex1, normals, textures, indices, texture_array, normal_array);
ProcessVertex(vertex2, normals, textures, indices, texture_array, normal_array);
ProcessVertex(vertex3, normals, textures, indices, texture_array, normal_array);
if (!std::getline(inFile, line))
{
break;
}
// Always go in here
}
} catch (const std::exception& e)
{
// Always go in here
inFile.close();
vertex_array = std::vector<GLfloat>(vertices.size() * 3);
int p = 0;
for (auto& vertex : vertices)
{
vertex_array[p++] = vertex.x;
vertex_array[p++] = vertex.y;
vertex_array[p++] = vertex.z;
}
return render_engine::loader::LoadToVAO(vertex_array, texture_array, normal_array, indices);
}
inFile.close();
vertex_array = std::vector<GLfloat>( vertices.size() * 3 );
int p = 0;
for ( auto& vertex : vertices )
{
vertex_array[p++] = vertex.x;
vertex_array[p++] = vertex.y;
vertex_array[p++] = vertex.z;
}
return render_engine::loader::LoadToVAO( vertex_array, texture_array, indices);
}

View File

@@ -3,4 +3,12 @@
#include <string>
#include "../models/model.h"
models::RawModel LoadObjModel(std::string file_name);
namespace render_engine
{
/*
* @brief: This function reads an .obj file, loads it into a VAO and returns a RawModel
*
* @param file_name: The path to the .obj file
*/
models::RawModel LoadObjModel(std::string file_name);
}

View File

@@ -0,0 +1,162 @@
#include <iostream>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include "in_Game_Scene.h"
#include "startup_Scene.h"
#include "../gui/gui_interactable.h"
#include "../models/model.h"
#include "../renderEngine/loader.h"
#include "../renderEngine/obj_loader.h"
#include "../renderEngine/renderer.h"
#include "../shaders/entity_shader.h"
#include "../toolbox/toolbox.h"
#include <opencv2/core/base.hpp>
#include "../computervision/HandDetectRegion.h"
#include "../computervision/ObjectDetection.h"
namespace scene
{
std::vector<entities::Entity> entities;
std::vector<entities::Light> lights;
models::RawModel raw_model;
models::ModelTexture texture;
shaders::EntityShader *shader;
shaders::GuiShader *gui_shader;
entities::Camera camera(glm::vec3(0, 0, 0), glm::vec3(0, 0, 0));
std::vector<gui::GuiTexture*> guis;
std::vector<computervision::HandDetectRegion> regions;
computervision::HandDetectRegion reg_left("left", 0, 0, 150, 150), reg_right("right", 0, 0, 150, 150), reg_up("up", 0, 0, 150, 150);
In_Game_Scene::In_Game_Scene()
{
shader = new shaders::EntityShader;
shader->Init();
render_engine::renderer::Init(*shader);
gui_shader = new shaders::GuiShader();
gui_shader->Init();
}
scene::Scenes scene::In_Game_Scene::start(GLFWwindow* window)
{
// set up squares according to size of camera input
cv::Mat camera_frame;
static_camera::getCap().read(camera_frame); // get camera frame to know the width and height
reg_left.SetXPos(10);
reg_left.SetYPos(camera_frame.rows / 2 - reg_left.GetHeight()/2);
reg_right.SetXPos(camera_frame.cols - 10 - reg_right.GetWidth());
reg_right.SetYPos(camera_frame.rows / 2 - reg_right.GetHeight()/2);
reg_up.SetXPos(camera_frame.cols / 2 - reg_up.GetWidth() / 2);
reg_up.SetYPos(10);
raw_model = render_engine::LoadObjModel("res/House.obj");
texture = { render_engine::loader::LoadTexture("res/Texture.png") };
texture.shine_damper = 10;
texture.reflectivity = 0;
models::TexturedModel model = { raw_model, texture };
int z = 0;
for (int i = 0; i < 5; ++i)
{
entities.push_back(entities::Entity(model, glm::vec3(0, -50, -50 - z), glm::vec3(0, 90, 0), 20));
z += (raw_model.model_size.x * 20);
}
lights.push_back(entities::Light(glm::vec3(0, 1000, -7000), glm::vec3(5, 5, 5)));
lights.push_back(entities::Light(glm::vec3(0, 0, -30), glm::vec3(2, 0, 2), glm::vec3(0.0001f, 0.0001f, 0.0001f)));
lights.push_back(entities::Light(glm::vec3(0, 0, -200), glm::vec3(0, 2, 0), glm::vec3(0.0001f, 0.0001f, 0.0001f)));
// GUI stuff
gui::Button button(render_engine::loader::LoadTexture("res/Mayo.png"), glm::vec2(0.5f, 0.0f), glm::vec2(0.25f, 0.25f));
button.SetHoverTexture(render_engine::loader::LoadTexture("res/Texture.png"));
button.SetClickedTexture(render_engine::loader::LoadTexture("res/Mayo.png"));
button.SetOnClickAction([]()
{
std::cout << "I got clicked on!" << std::endl;
});
guis.push_back(&button);
while (return_value == scene::Scenes::INGAME)
{
update(window);
button.Update(window);
render();
glfwSwapBuffers(window);
glfwPollEvents();
}
shader->CleanUp();
gui_shader->CleanUp();
render_engine::loader::CleanUp();
return return_value;
}
void scene::In_Game_Scene::render()
{
// Render
render_engine::renderer::Prepare();
shader->Start();
shader->LoadSkyColor(render_engine::renderer::SKY_COLOR);
shader->LoadLights(lights);
shader->LoadViewMatrix(camera);
// Renders each entity in the entities list
for (entities::Entity& entity : entities)
{
render_engine::renderer::Render(entity, *shader);
}
// Render GUI items
render_engine::renderer::Render(guis, *gui_shader);
// Stop rendering the entities
shader->Stop();
}
void scene::In_Game_Scene::update(GLFWwindow* window)
{
camera.Move(window);
update_hand_detection();
}
void scene::In_Game_Scene::onKey(GLFWwindow* window, int key, int scancode, int action, int mods)
{
if (glfwGetKey(window, GLFW_KEY_SPACE) == GLFW_PRESS)
{
cv::destroyWindow("camera");
return_value = scene::Scenes::STOP;
}
if (glfwGetKey(window, GLFW_KEY_B) == GLFW_PRESS)
{
reg_left.CalibrateBackground();
reg_right.CalibrateBackground();
reg_up.CalibrateBackground();
}
if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
{
std::vector<int> tresholds = reg_left.CalculateSkinTresholds();
reg_right.setSkinTresholds(tresholds);
reg_up.setSkinTresholds(tresholds);
}
}
void scene::In_Game_Scene::update_hand_detection()
{
cv::Mat camera_frame;
static_camera::getCap().read(camera_frame);
reg_left.DetectHand(camera_frame);
reg_right.DetectHand(camera_frame);
reg_up.DetectHand(camera_frame);
cv::imshow("camera", camera_frame);
}
}

View File

@@ -0,0 +1,23 @@
#pragma once
#include "scene.h"
namespace scene
{
class In_Game_Scene : public scene::Scene
{
private:
scene::Scenes return_value = scene::Scenes::INGAME;
void update_hand_detection();
public:
In_Game_Scene();
Scenes start(GLFWwindow* window) override;
void render() override;
void update(GLFWwindow* window) override;
void onKey(GLFWwindow* window, int key, int scancode, int action, int mods) override;
};
}

31
src/scenes/scene.h Normal file
View File

@@ -0,0 +1,31 @@
#pragma once
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <map>
namespace scene {
enum class Scenes
{
STARTUP,
INGAME,
GAMEOVER,
CALIBRATION,
STOP
};
class Scene
{
public:
virtual Scenes start(GLFWwindow* window) = 0;
virtual void render() = 0;
virtual void update(GLFWwindow* window) = 0;
virtual void onKey(GLFWwindow* window, int key, int scancode, int action, int mods) {};
};
}
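To make the start() contract concrete: each scene runs its own loop and returns the next Scenes value, and the caller swaps scenes accordingly. A minimal sketch of that outer loop, close to what main.cpp does in this change:
#include "scenes/scene.h"
#include "scenes/startup_Scene.h"
#include "scenes/in_Game_Scene.h"
void RunScenes(GLFWwindow* window)
{
scene::Scene* current_scene = new scene::Startup_Scene();
bool running = true;
while (running)
{
// start() blocks until the scene decides which scene comes next
scene::Scenes next = current_scene->start(window);
delete current_scene;
current_scene = nullptr;
switch (next)
{
case scene::Scenes::STARTUP:
current_scene = new scene::Startup_Scene();
break;
case scene::Scenes::INGAME:
current_scene = new scene::In_Game_Scene();
break;
case scene::Scenes::STOP:
default:
running = false;
break;
}
}
}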

View File

@@ -0,0 +1,45 @@
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <map>
#include "startup_Scene.h"
#include "../computervision/ObjectDetection.h"
#include "../computervision/HandDetectRegion.h"
#include <iostream>
namespace scene
{
computervision::ObjectDetection objDetect;
scene::Scenes scene::Startup_Scene::start(GLFWwindow *window)
{
while (return_value == scene::Scenes::STARTUP)
{
render();
update(window);
glfwSwapBuffers(window);
glfwPollEvents();
}
return return_value;
}
void scene::Startup_Scene::render()
{
}
void scene::Startup_Scene::update(GLFWwindow* window)
{
bool hand_present;
objDetect.DetectHand(objDetect.ReadCamera(),hand_present);
}
void scene::Startup_Scene::onKey(GLFWwindow* window, int key, int scancode, int action, int mods)
{
if (glfwGetKey(window, GLFW_KEY_SPACE) == GLFW_PRESS)
{
return_value = scene::Scenes::INGAME;
cv::destroyWindow("camera");
}
}
}

View File

@@ -0,0 +1,22 @@
#pragma once
#include "scene.h"
#include <map>
namespace scene
{
extern GLFWwindow* window;
class Startup_Scene : public scene::Scene
{
private:
scene::Scenes return_value = scene::Scenes::STARTUP;
public:
Scenes start(GLFWwindow* window) override;
void render() override;
void update(GLFWwindow* window) override;
void onKey(GLFWwindow* window, int key, int scancode, int action, int mods) override;
};
}

View File

@@ -0,0 +1,204 @@
#include "entity_shader.h"
#include "../toolbox/toolbox.h"
namespace shaders
{
static std::string vertex_shader = R"(
#version 400 core
// The VertexShader is run for each vertex on the screen.
// Position of the vertex
in vec3 position;
// Coordinates of the texture
in vec2 texture_coords;
// The normal of the vertex
in vec3 normal;
// Equal to the texture_coords
out vec2 pass_texture_coords;
out vec3 surface_normal;
out vec3 to_light_vector[4];
out vec3 to_camera_vector;
out float visibility;
uniform mat4 model_matrix;
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform vec3 light_position[4];
const float density = 0.0017;
const float gradient = 4;
void main(void)
{
// Calculate the real position of the vertex (after rotation and scaling)
vec4 world_position = model_matrix * vec4(position, 1.0);
vec4 position_rel_to_cam = view_matrix * world_position;
// Tell OpenGL where to render the vertex
gl_Position = projection_matrix * position_rel_to_cam;
// Pass the textureCoords directly to the fragment shader
pass_texture_coords = texture_coords;
surface_normal = (model_matrix * vec4(normal, 0.0)).xyz;
for (int i = 0; i < 4; i++)
{
to_light_vector[i] = light_position[i] - world_position.xyz;
}
to_camera_vector = (inverse(view_matrix) * vec4(0.0, 0.0, 0.0, 1.0)).xyz - world_position.xyz;
// Calculate the density/visibility of the vertex with the fog
float distance = length(position_rel_to_cam.xyz);
visibility = exp(-pow((distance * density), gradient));
visibility = clamp(visibility, 0.0, 1.0);
}
)";
static std::string fragment_shader = R"(
#version 400 core
// The FragmentShader is run for each pixel in a face on the screen.
// Interpolated textureCoordinates of the vertex (relative to the distance to each vertex)
in vec2 pass_texture_coords;
in vec3 surface_normal;
in vec3 to_light_vector[4];
in vec3 to_camera_vector;
in float visibility;
// Final color of the pixel
out vec4 out_color;
// The texture of the model
uniform sampler2D model_texture;
uniform vec3 light_color[4];
uniform vec3 attenuation[4];
uniform float shine_damper;
uniform float reflectivity;
uniform vec3 sky_color;
const float min_diffuse_lighting = 0.1;
void main(void)
{
vec3 unit_normal = normalize(surface_normal);
vec3 unit_camera_vector = normalize(to_camera_vector);
vec3 total_diffuse = vec3(0.0);
vec3 total_specular = vec3(0.0);
for (int i = 0; i < 4; i++)
{
float distance = length(to_light_vector[i]);
float att_factor = attenuation[i].x + (attenuation[i].y * distance) + (attenuation[i].z * distance * distance);
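// The diffuse and specular terms below are divided by att_factor, so the light's
// contribution falls off with distance according to its attenuation vector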
vec3 unit_light_vector = normalize(to_light_vector[i]);
// Calculate the diffuse lighting
float dot_diffuse = dot(unit_normal, unit_light_vector);
float brightness = max(dot_diffuse, 0.0);
// Calculate the specular lighting
vec3 light_direction = -unit_light_vector;
vec3 reflected_light_direction = reflect(light_direction, unit_normal);
float dot_specular = dot(reflected_light_direction, unit_camera_vector);
dot_specular = max(dot_specular, 0.0);
float damped_specular = pow(dot_specular, shine_damper);
total_diffuse = total_diffuse + (brightness * light_color[i]) / att_factor;
total_specular = total_specular + (damped_specular * reflectivity * light_color[i]) / att_factor;
}
total_diffuse = max(total_diffuse, min_diffuse_lighting);
out_color = vec4(total_diffuse, 1.0) * texture(model_texture, pass_texture_coords) + vec4(total_specular, 1.0);
out_color = mix(vec4(sky_color, 1.0), out_color, visibility);
}
)";
EntityShader::EntityShader(): ShaderProgram(vertex_shader, fragment_shader)
{ }
void EntityShader::LoadModelMatrix(const glm::mat4& matrix) const
{
LoadMatrix(location_model_matrix, matrix);
}
void EntityShader::LoadProjectionMatrix(const glm::mat4& projection) const
{
LoadMatrix(location_projection_matrix, projection);
}
void EntityShader::LoadViewMatrix(entities::Camera& camera) const
{
const glm::mat4 view_matrix = toolbox::CreateViewMatrix(camera);
LoadMatrix(location_view_matrix, view_matrix);
}
void EntityShader::LoadLights(std::vector<entities::Light>& lights) const
{
for (int i = 0; i < MAX_LIGHTS; ++i)
{
if (i < lights.size())
{
LoadVector(location_light_position[i], lights[i].GetPosition());
LoadVector(location_light_color[i], lights[i].GetColor());
LoadVector(location_light_attenuation[i], lights[i].GetAttenuation());
} else
{
LoadVector(location_light_position[i], glm::vec3(0, 0, 0));
LoadVector(location_light_color[i], glm::vec3(0, 0, 0));
LoadVector(location_light_attenuation[i], glm::vec3(1, 0, 0));
}
}
}
void EntityShader::LoadShineVariables(float shine_damper, float reflectivity) const
{
LoadFloat(location_shine_damper, shine_damper);
LoadFloat(location_reflectivity, reflectivity);
}
void EntityShader::LoadSkyColor(glm::vec3 sky_color) const
{
LoadVector(location_sky_color, sky_color);
}
void EntityShader::SetAttributes() const
{
// Load the position VBO and textureCoords VBO from the VAO into the shader "in" variables
SetAttribute(0, "position");
SetAttribute(1, "texture_coords");
SetAttribute(2, "normal");
}
void EntityShader::GetAllUniformLocations()
{
// Get the locations from the uniform variables from the shaders
location_model_matrix = GetUniformLocation("model_matrix");
location_projection_matrix = GetUniformLocation("projection_matrix");
location_view_matrix = GetUniformLocation("view_matrix");
location_shine_damper = GetUniformLocation("shine_damper");
location_reflectivity = GetUniformLocation("reflectivity");
location_sky_color = GetUniformLocation("sky_color");
for (int i = 0; i < MAX_LIGHTS; ++i)
{
std::string light_pos = std::string("light_position[") + std::to_string(i) + "]";
location_light_position[i] = GetUniformLocation(light_pos.c_str());
std::string light_color = std::string("light_color[") + std::to_string(i) + "]";
location_light_color[i] = GetUniformLocation(light_color.c_str());
std::string light_attenuation = std::string("attenuation[") + std::to_string(i) + "]";
location_light_attenuation[i] = GetUniformLocation(light_attenuation.c_str());
}
}
}

View File

@@ -0,0 +1,80 @@
#pragma once
#include <glm/gtc/matrix_transform.hpp>
#include <vector>
#include "shader_program.h"
#include "../entities/camera.h"
#include "../entities/light.h"
/*
This class handles the shaders for the entities.
*/
namespace shaders
{
class EntityShader : public ShaderProgram
{
private:
const static int MAX_LIGHTS = 4;
GLuint location_model_matrix;
GLuint location_projection_matrix;
GLuint location_view_matrix;
GLuint location_light_position[MAX_LIGHTS];
GLuint location_light_color[MAX_LIGHTS];
GLuint location_light_attenuation[MAX_LIGHTS];
GLuint location_shine_damper;
GLuint location_reflectivity;
GLuint location_sky_color;
public:
EntityShader();
/*
* @brief: A method to load the model matrix into the shader
*
* @param matrix: The model matrix
*/
void LoadModelMatrix(const glm::mat4& matrix) const;
/*
* @brief: A method to load the projection matrix into the shader
*
* @param projection: The projection matrix
*/
void LoadProjectionMatrix(const glm::mat4& projection) const;
/*
* @brief: A method to load the view matrix (camera) into the shader
*
* @param camera: The camera which the scene needs to be rendered from
*/
void LoadViewMatrix(entities::Camera& camera) const;
/*
* @brief: A method to load some lights into the shader
*
* @param lights: The lights
*/
void LoadLights(std::vector<entities::Light>& lights) const;
/*
* @brief: A method to load the shine variables from a model into the shader
*
* @param shine_damper: How sharply the specular highlight falls off as the view angle moves away from the reflected light
* @param reflectivity: The amount of light the model reflects
*/
void LoadShineVariables(float shine_damper, float reflectivity) const;
/*
* @brief: A method to load the sky color into the shader. This color will be used for the fog
*
* @param sky_color: The color of the sky
*/
void LoadSkyColor(glm::vec3 sky_color) const;
protected:
void SetAttributes() const override;
void GetAllUniformLocations() override;
};
}

View File

@@ -0,0 +1,57 @@
#include "gui_shader.h"
namespace shaders
{
static std::string vertex_shader = R"(
#version 140
in vec2 position;
out vec2 texture_coords;
uniform mat4 model_matrix;
void main(void)
{
gl_Position = model_matrix * vec4(position, 0.0, 1.0);
// This makes the top-left corner (0, 0) and the bottom-right corner (1, 1)
texture_coords = vec2((position.x + 1.0) / 2.0, 1 - (position.y + 1.0) / 2.0);
}
)";
static std::string fragment_shader = R"(
#version 140
in vec2 texture_coords;
out vec4 out_color;
uniform sampler2D gui_texture;
void main(void)
{
out_color = texture(gui_texture, texture_coords);
}
)";
GuiShader::GuiShader() : ShaderProgram(vertex_shader, fragment_shader)
{ }
void GuiShader::LoadModelMatrix(const glm::mat4& matrix) const
{
LoadMatrix(location_model_matrix, matrix);
}
void GuiShader::SetAttributes() const
{
SetAttribute(0, "position");
}
void GuiShader::GetAllUniformLocations()
{
location_model_matrix = GetUniformLocation("model_matrix");
}
}

31
src/shaders/gui_shader.h Normal file
View File

@@ -0,0 +1,31 @@
#pragma once
#include <glm/gtc/matrix_transform.hpp>
#include "shader_program.h"
namespace shaders
{
/*
* This class handles the shaders for all the GUI items
*/
class GuiShader : public ShaderProgram
{
private:
GLuint location_model_matrix;
public:
GuiShader();
/*
* @brief: A method to load the model matrix into the shader
*
* @param matrix: The model matrix
*/
void LoadModelMatrix(const glm::mat4& matrix) const;
protected:
void SetAttributes() const override;
void GetAllUniformLocations() override;
};
}

View File

@@ -1,4 +1,5 @@
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <iostream>
#include <fstream>
#include <vector>

View File

@@ -21,29 +21,87 @@ namespace shaders
ShaderProgram(std::string& vertex_shader, std::string& fragment_shader);
virtual ~ShaderProgram() = default;
// Call this function after making the shaderprogram (sets all the attributes of the shader)
/*
* @brief: Call this function after making the shaderprogram (sets all the attributes of the shader)
*/
void Init();
// Call this function before rendering
/*
* @brief: Call this function before rendering
*/
void Start() const;
// Call this function after rendering
/*
* @brief: Call this function after rendering
*/
void Stop() const;
// Call this function when closing the application
/*
* @brief: Call this function when closing the application
*/
void CleanUp() const;
protected:
// Set the inputs of the vertex shader
/*
* @brief: Set the inputs of the vertex shader
*/
virtual void SetAttributes() const = 0;
/*
* @brief: Sets/binds a input variable (in) to a VBO from the model
*
* @param attribute: The id of the VBO
* @param variable_name: The name of the "in" variable in the shader
*/
void SetAttribute(const GLuint attribute, const char* variable_name) const;
// Loads value's (uniform variables) into the shader
/*
* @brief: This function loads a float value into a uniform variable into the shader
*
* @param location: The location of the variable in openGL
* @param value: The value which will be loaded into the variable
*/
void LoadFloat(GLuint location, GLfloat value) const;
/*
* @brief: This function loads a vector value into a uniform variable into the shader
*
* @param location: The location of the variable in openGL
* @param vector: The value which will be loaded into the variable
*/
void LoadVector(GLuint location, glm::vec3 vector) const;
/*
* @brief: This function loads a 4x4 matrix value into a uniform variable into the shader
*
* @param location: The location of the variable in openGL
* @param matrix: The value which will be loaded into the variable
*/
void LoadMatrix(GLuint location, glm::mat4 matrix) const;
/*
* @brief: This function will get all the locations of each uniform variable
*/
virtual void GetAllUniformLocations() = 0;
/*
* @brief: This function will retrieve the location of a uniform variable
*
* @param uniform_name: The name of the uniform variable
*
* @return: The location of the uniform variable
*/
GLuint GetUniformLocation(const GLchar* uniform_name) const;
private:
/*
* @brief: This function will load a shader into openGL
*
* @param shader_string: The shader as a string (the whole code)
* @param type: The type of the shader (Vertex/Fragment)
*
* @return: The id of the shader given by openGL
*/
GLuint LoadShader(const std::string& shader_string, GLuint type) const;
};
}

View File

@@ -1,87 +0,0 @@
#include "static_shader.h"
#include "../toolbox/toolbox.h"
namespace shaders
{
static std::string vertex_shader = R"(
#version 400 core
// The VertexShader is run for each vertex on the screen.
// Position of the vertex
in vec3 position;
// Coordinates of the texture
in vec2 texture_coords;
// Equal to the texture_coords
out vec2 pass_texture_coords;
uniform mat4 model_matrix;
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
void main(void)
{
// Tell OpenGL where to render the vertex
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(position, 1.0);
// Pass the texture_coords directly to the fragment shader
pass_texture_coords = texture_coords;
}
)";
static std::string fragment_shader = R"(
#version 400 core
// The FragmentShader is run for each pixel in a face on the screen.
// Interpolated textureCoordinates of the vertex (relative to the distance to each vertex)
in vec2 pass_texture_coords;
// Final color of the pixel
out vec4 out_color;
// The texture of the model
uniform sampler2D texture_sampler;
void main(void)
{
out_color = texture(texture_sampler, pass_texture_coords);
}
)";
StaticShader::StaticShader(): ShaderProgram(vertex_shader, fragment_shader)
{
}
void StaticShader::LoadModelMatrix(const glm::mat4& matrix) const
{
LoadMatrix(location_model_matrix, matrix);
}
void StaticShader::LoadProjectionMatrix(const glm::mat4& projection) const
{
LoadMatrix(location_projection_matrix, projection);
}
void StaticShader::LoadViewMatrix(entities::Camera& camera) const
{
const glm::mat4 view_matrix = toolbox::CreateViewMatrix(camera);
LoadMatrix(location_view_matrix, view_matrix);
}
void StaticShader::SetAttributes() const
{
SetAttribute(0, "position");
SetAttribute(1, "texture_coords");
}
void StaticShader::GetAllUniformLocations()
{
location_model_matrix = GetUniformLocation("model_matrix");
location_projection_matrix = GetUniformLocation("projection_matrix");
location_view_matrix = GetUniformLocation("view_matrix");
}
}

View File

@@ -1,31 +0,0 @@
#pragma once
#include <glm/gtc/matrix_transform.hpp>
#include "shader_program.h"
#include "../entities/camera.h"
/*
This class represents the shaders for the models.
*/
namespace shaders
{
class StaticShader : public ShaderProgram
{
private:
GLuint location_model_matrix;
GLuint location_projection_matrix;
GLuint location_view_matrix;
public:
StaticShader();
void LoadModelMatrix(const glm::mat4& matrix) const;
void LoadProjectionMatrix(const glm::mat4& projection) const;
void LoadViewMatrix(entities::Camera& camera) const;
protected:
void SetAttributes() const override;
void GetAllUniformLocations() override;
};
}

46
src/toolbox/Timer.h Normal file
View File

@@ -0,0 +1,46 @@
#pragma once
namespace toolbox
{
/*
* This class represents a timer which needs to be updated
* every frame to work correctly.
*/
class Timer
{
private:
float current_time;
float final_time;
bool has_finished;
public:
/*
* @brief: Constructor to make the timer
*
* @param final_time: The time which the timer needs to count to
*/
Timer(float final_time): current_time(0), final_time(final_time), has_finished(false) {}
/*
* @brief: Updates the timer. Call this method once every iteration in the game loop
*
* @param delta: The delta time of the game (time elapsed since the previous frame)
*/
void UpdateTimer(const double delta)
{
current_time += delta;
if (current_time >= final_time)
{
has_finished = true;
}
}
/*
* @brief: Returns if the timer has finished
*
* @return: True if the timer has finished
*/
bool HasFinished() const { return has_finished; }
};
}
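A usage sketch, assuming delta is the per-frame delta time produced by the game loop; the static keeps the timer accumulating across calls:
#include <iostream>
#include "toolbox/Timer.h"
void TimerExample(double delta)
{
static toolbox::Timer spawn_timer(3.0f); // counts three seconds of accumulated game time
// Call once per game-loop iteration with that frame's delta time
spawn_timer.UpdateTimer(delta);
if (spawn_timer.HasFinished())
{
std::cout << "three seconds have passed" << std::endl;
}
}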

View File

@@ -2,6 +2,14 @@
namespace toolbox
{
glm::mat4 CreateModelMatrix(glm::vec2 translation, glm::vec2 scale)
{
glm::mat4 matrix(1.0f);
matrix = glm::translate(matrix, glm::vec3(translation.x, translation.y, 0));
matrix = glm::scale(matrix, glm::vec3(scale.x, scale.y, 0));
return matrix;
}
glm::mat4 CreateModelMatrix(glm::vec3 translation, glm::vec3 rotation, float scale)
{
glm::mat4 matrix(1.0f);

View File

@@ -5,10 +5,45 @@
namespace toolbox
{
// Window macros
#define DEFAULT_WIDTH 1920
#define DEFAULT_HEIGHT 1080
// Change these macros to change the window size
#define WINDOW_WIDTH 1400.0f
#define WINDOW_HEIGT 800.0f
#define SCALED_WIDTH (WINDOW_WIDTH/DEFAULT_WIDTH)
#define SCALED_HEIGHT (WINDOW_HEIGT/DEFAULT_HEIGHT)
//
/*
* @brief: This function will create a model matrix
*
* @param translation: The position of the model
* @param scale: The scale of the model
*
* @return: The model matrix of the model
*/
glm::mat4 CreateModelMatrix(glm::vec2 translation, glm::vec2 scale);
/*
* @brief: This function will create a model matrix
*
* @param translation: The position of the model
* @param rotation: The rotation of the model
* @param scale: The scale of the model
*
* @return: The model matrix of the model
*/
glm::mat4 CreateModelMatrix(glm::vec3 translation, glm::vec3 rotation, float scale);
/*
* @brief: This function will create a view matrix from the camera's position
*
* @param camera: The camera the view matrix needs to be made from
*
* @return: The view matrix
*/
glm::mat4 CreateViewMatrix(entities::Camera& camera);
}
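As a small illustration, the 2D overload is what the GUI renderer uses to place a quad; positions and scales live in the normalized [-1, 1] screen space described in gui_element.h:
#include <glm/gtc/matrix_transform.hpp>
#include "toolbox/toolbox.h"
glm::mat4 GuiMatrixExample()
{
// Centre-right of the screen, a quarter of the screen in size
const glm::vec2 position(0.5f, 0.0f);
const glm::vec2 scale(0.25f, 0.25f);
return toolbox::CreateModelMatrix(position, scale);
}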

View File

@@ -19,29 +19,71 @@
</ProjectConfiguration>
</ItemGroup>
<ItemGroup>
<ClCompile Include="src\collision\collision_handler.cpp" />
<ClCompile Include="src\computervision\calibration\HandCalibrator.cpp" />
<ClCompile Include="src\computervision\HandDetectRegion.cpp" />
<ClCompile Include="src\scenes\in_Game_Scene.cpp" />
<ClCompile Include="src\computervision\async\async_arm_detection.cpp" />
<ClCompile Include="src\computervision\ObjectDetection.cpp" />
<ClCompile Include="src\computervision\OpenPoseVideo.cpp" />
<ClCompile Include="src\computervision\SkinDetector.cpp" />
<ClCompile Include="src\computervision\FingerCount.cpp" />
<ClCompile Include="src\computervision\BackgroundRemover.cpp" />
<ClCompile Include="src\entities\camera.cpp" />
<ClCompile Include="src\entities\collision_entity.cpp" />
<ClCompile Include="src\entities\entity.cpp" />
<ClCompile Include="src\gui\gui_interactable.cpp" />
<ClCompile Include="src\main.cpp" />
<ClCompile Include="src\renderEngine\loader.cpp" />
<ClCompile Include="src\renderEngine\obj_loader.cpp" />
<ClCompile Include="src\renderEngine\renderer.cpp" />
<ClCompile Include="src\shaders\gui_shader.cpp" />
<ClCompile Include="src\shaders\shader_program.cpp" />
<ClCompile Include="src\shaders\static_shader.cpp" />
<ClCompile Include="src\shaders\entity_shader.cpp" />
<ClCompile Include="src\toolbox\toolbox.cpp" />
<ClCompile Include="src\scenes\startup_Scene.cpp" />
</ItemGroup>
<ItemGroup>
<ClInclude Include="src\collision\collision.h" />
<ClInclude Include="src\collision\collision_handler.h" />
<ClInclude Include="src\computervision\calibration\HandCalibrator.h" />
<ClInclude Include="src\computervision\calibration\StaticSkinTreshold.h" />
<ClInclude Include="src\computervision\HandDetectRegion.h" />
<ClInclude Include="src\scenes\in_Game_Scene.h" />
<ClInclude Include="src\scenes\scene.h" />
<ClInclude Include="src\computervision\async\async_arm_detection.h" />
<ClInclude Include="src\computervision\async\StaticCameraInstance.h" />
<ClInclude Include="src\computervision\FingerCount.h" />
<ClInclude Include="src\computervision\BackgroundRemover.h" />
<ClInclude Include="src\computervision\OpenPoseVideo.h" />
<ClInclude Include="src\computervision\SkinDetector.h" />
<ClInclude Include="src\computervision\ObjectDetection.h" />
<ClInclude Include="src\entities\camera.h" />
<ClInclude Include="src\entities\collision_entity.h" />
<ClInclude Include="src\entities\entity.h" />
<ClInclude Include="src\entities\light.h" />
<ClInclude Include="src\gui\gui_element.h" />
<ClInclude Include="src\gui\gui_interactable.h" />
<ClInclude Include="src\models\model.h" />
<ClInclude Include="src\renderEngine\loader.h" />
<ClInclude Include="src\renderEngine\obj_loader.h" />
<ClInclude Include="src\renderEngine\renderer.h" />
<ClInclude Include="src\shaders\gui_shader.h" />
<ClInclude Include="src\shaders\shader_program.h" />
<ClInclude Include="src\shaders\static_shader.h" />
<ClInclude Include="src\shaders\entity_shader.h" />
<ClInclude Include="src\stb_image.h" />
<ClInclude Include="src\toolbox\Timer.h" />
<ClInclude Include="src\toolbox\toolbox.h" />
<ClInclude Include="src\scenes\startup_Scene.h" />
</ItemGroup>
<ItemGroup>
<Xml Include="res\haarcascade_frontalface_alt.xml" />
</ItemGroup>
<ItemGroup>
<None Include="..\..\Avans Hogeschool\Kim Veldhoen - Proftaak 2.4\pose_iter_160000.caffemodel" />
<None Include="res\pose\coco\pose_deploy_linevec.prototxt" />
<None Include="res\pose\mpi\pose_deploy_linevec_faster_4_stages.prototxt" />
<None Include="res\pose\mpi\pose_iter_160000.caffemodel" />
</ItemGroup>
<PropertyGroup Label="Globals">
<VCProjectVersion>16.0</VCProjectVersion>
@@ -101,14 +143,18 @@
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<LinkIncremental>true</LinkIncremental>
<IncludePath>C:\opencv\build\include;$(IncludePath)</IncludePath>
<LibraryPath>C:\opencv\build\x64\vc15\lib;$(LibraryPath)</LibraryPath>
<IncludePath>C:\opencv\build\include;$(IncludePath);C:\opencv\opencv\build\include;C:\opencv\build\include</IncludePath>
<LibraryPath>C:\opencv\build\x64\vc15\lib;$(LibraryPath);C:\opencv\opencv\build\x64\vc15\lib;C:\opencv\build\x64\vc15\lib</LibraryPath>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<LinkIncremental>false</LinkIncremental>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<LinkIncremental>false</LinkIncremental>
<IncludePath>$(VC_IncludePath);$(WindowsSDK_IncludePath);;C:\opencv\opencv\build\include;C:\opencv\build\include</IncludePath>
<LibraryPath>$(VC_LibraryPath_x64);$(WindowsSDK_LibraryPath_x64);C:\opencv\opencv\build\x64\vc15\lib;C:\opencv\build\x64\vc15\lib</LibraryPath>
<IncludePath>C:\opencv\build\include\;$(VC_IncludePath);$(WindowsSDK_IncludePath);C:\opencv\opencv\build\include</IncludePath>
<LibraryPath>C:\opencv\build\x64\vc15\lib;$(VC_LibraryPath_x64);$(WindowsSDK_LibraryPath_x64);C:\opencv\opencv\build\x64\vc15\lib</LibraryPath>
</PropertyGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<ClCompile>
@@ -140,7 +186,7 @@
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalLibraryDirectories>$(SolutionDir)lib\glfw-3.3.2\$(Platform);$(SolutionDir)lib\glew-2.1.0\lib\Release\$(Platform);%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>opencv_world452d.lib;%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalDependencies>opencv_world452d.lib;%(AdditionalDependencies); opencv_world452.lib;opencv_world452d.lib</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
@@ -181,6 +227,8 @@
<OptimizeReferences>true</OptimizeReferences>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalLibraryDirectories>$(SolutionDir)lib\glfw-3.3.2\$(Platform);$(SolutionDir)lib\glew-2.1.0\lib\Release\$(Platform);%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies); opencv_world452.lib;opencv_world452d.lib</AdditionalDependencies>
<AdditionalDependencies>opencv_world452.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
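
The property groups in this diff accumulate the same OpenCV paths more than once (for example, C:\opencv\build\include appears twice in the Debug|x64 IncludePath), and the Debug|x64 link line ends up listing both opencv_world452.lib and opencv_world452d.lib. Below is a minimal sketch of one way to express the same setup without the duplication, assuming OpenCV 4.5.2 lives under C:\opencv\build as the hard-coded paths suggest; this is not the change made in the commits above:

  <!-- Sketch only: a single OpenCvDir property feeds both x64 configurations,
       and each configuration links only its matching world library
       (debug builds get opencv_world452d.lib, release builds opencv_world452.lib). -->
  <PropertyGroup>
    <OpenCvDir>C:\opencv\build</OpenCvDir>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64' Or '$(Configuration)|$(Platform)'=='Release|x64'">
    <IncludePath>$(OpenCvDir)\include;$(IncludePath)</IncludePath>
    <LibraryPath>$(OpenCvDir)\x64\vc15\lib;$(LibraryPath)</LibraryPath>
  </PropertyGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
    <Link>
      <AdditionalDependencies>opencv_world452d.lib;%(AdditionalDependencies)</AdditionalDependencies>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
    <Link>
      <AdditionalDependencies>opencv_world452.lib;%(AdditionalDependencies)</AdditionalDependencies>
    </Link>
  </ItemDefinitionGroup>

Keeping the debug and release world libraries in separate configurations avoids mixing the two C runtime flavours in one binary.
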

@@ -1,84 +1,70 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Filter Include="Source Files">
<UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>
<Extensions>cpp;c;cc;cxx;c++;def;odl;idl;hpj;bat;asm;asmx</Extensions>
</Filter>
<Filter Include="Header Files">
<UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>
<Extensions>h;hh;hpp;hxx;h++;hm;inl;inc;ipp;xsd</Extensions>
</Filter>
<Filter Include="Resource Files">
<UniqueIdentifier>{67DA6AB6-F800-4c08-8B7A-83BB121AAD01}</UniqueIdentifier>
<Extensions>rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms</Extensions>
</Filter>
<ClCompile Include="src\collision\collision_handler.cpp" />
<ClCompile Include="src\scenes\in_Game_Scene.cpp" />
<ClCompile Include="src\computervision\async\async_arm_detection.cpp" />
<ClCompile Include="src\computervision\ObjectDetection.cpp" />
<ClCompile Include="src\computervision\OpenPoseVideo.cpp" />
<ClCompile Include="src\computervision\SkinDetector.cpp" />
<ClCompile Include="src\computervision\FingerCount.cpp" />
<ClCompile Include="src\computervision\BackgroundRemover.cpp" />
<ClCompile Include="src\entities\camera.cpp" />
<ClCompile Include="src\entities\collision_entity.cpp" />
<ClCompile Include="src\entities\entity.cpp" />
<ClCompile Include="src\gui\gui_interactable.cpp" />
<ClCompile Include="src\main.cpp" />
<ClCompile Include="src\renderEngine\loader.cpp" />
<ClCompile Include="src\renderEngine\obj_loader.cpp" />
<ClCompile Include="src\renderEngine\renderer.cpp" />
<ClCompile Include="src\shaders\gui_shader.cpp" />
<ClCompile Include="src\shaders\shader_program.cpp" />
<ClCompile Include="src\shaders\entity_shader.cpp" />
<ClCompile Include="src\toolbox\toolbox.cpp" />
<ClCompile Include="src\scenes\startup_Scene.cpp" />
<ClCompile Include="src\computervision\calibration\HandCalibrator.cpp" />
<ClCompile Include="src\computervision\HandDetectRegion.cpp" />
</ItemGroup>
<ItemGroup>
<ClCompile Include="src\entities\Camera.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="src\entities\Entity.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="src\renderEngine\Loader.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="src\renderEngine\Renderer.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="src\main.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="src\shaders\shader_program.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="src\shaders\static_shader.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="src\renderEngine\obj_loader.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="src\toolbox\toolbox.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="src\computervision\ObjectDetection.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClInclude Include="src\collision\collision.h" />
<ClInclude Include="src\collision\collision_handler.h" />
<ClInclude Include="src\scenes\in_Game_Scene.h" />
<ClInclude Include="src\scenes\scene.h" />
<ClInclude Include="src\computervision\async\async_arm_detection.h" />
<ClInclude Include="src\computervision\async\StaticCameraInstance.h" />
<ClInclude Include="src\computervision\FingerCount.h" />
<ClInclude Include="src\computervision\BackgroundRemover.h" />
<ClInclude Include="src\computervision\OpenPoseVideo.h" />
<ClInclude Include="src\computervision\SkinDetector.h" />
<ClInclude Include="src\computervision\ObjectDetection.h" />
<ClInclude Include="src\entities\camera.h" />
<ClInclude Include="src\entities\collision_entity.h" />
<ClInclude Include="src\entities\entity.h" />
<ClInclude Include="src\entities\light.h" />
<ClInclude Include="src\gui\gui_element.h" />
<ClInclude Include="src\gui\gui_interactable.h" />
<ClInclude Include="src\models\model.h" />
<ClInclude Include="src\renderEngine\loader.h" />
<ClInclude Include="src\renderEngine\obj_loader.h" />
<ClInclude Include="src\renderEngine\renderer.h" />
<ClInclude Include="src\shaders\gui_shader.h" />
<ClInclude Include="src\shaders\shader_program.h" />
<ClInclude Include="src\shaders\entity_shader.h" />
<ClInclude Include="src\stb_image.h" />
<ClInclude Include="src\toolbox\Timer.h" />
<ClInclude Include="src\toolbox\toolbox.h" />
<ClInclude Include="src\scenes\startup_Scene.h" />
<ClInclude Include="src\computervision\calibration\HandCalibrator.h" />
<ClInclude Include="src\computervision\HandDetectRegion.h" />
<ClInclude Include="src\computervision\calibration\StaticSkinTreshold.h" />
</ItemGroup>
<ItemGroup>
<ClInclude Include="src\entities\Camera.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="src\entities\Entity.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="src\models\Model.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="src\renderEngine\Loader.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="src\renderEngine\Renderer.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="src\stb_image.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="src\shaders\shader_program.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="src\shaders\static_shader.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="src\renderEngine\obj_loader.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="src\toolbox\toolbox.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="src\computervision\ObjectDetection.h">
<Filter>Header Files</Filter>
</ClInclude>
<Xml Include="res\haarcascade_frontalface_alt.xml" />
</ItemGroup>
<ItemGroup>
<None Include="..\..\Avans Hogeschool\Kim Veldhoen - Proftaak 2.4\pose_iter_160000.caffemodel" />
<None Include="res\pose\coco\pose_deploy_linevec.prototxt" />
<None Include="res\pose\mpi\pose_deploy_linevec_faster_4_stages.prototxt" />
<None Include="res\pose\mpi\pose_iter_160000.caffemodel" />
</ItemGroup>
</Project>
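
In the filters file above, the newly added computer-vision entries (HandDetectRegion, HandCalibrator, StaticSkinTreshold) sit in an item group without Filter metadata, while the older Camera and Entity entries are assigned to the Source Files and Header Files filters. Below is a sketch of what those new entries could look like with the same Filter metadata applied, so they appear inside the existing Solution Explorer folders; an illustration only, not part of the diff:

  <!-- Sketch only: the new computer-vision items tagged with the filters the
       file already defines, mirroring the pattern used for Camera.cpp and Entity.h. -->
  <ItemGroup>
    <ClCompile Include="src\computervision\HandDetectRegion.cpp">
      <Filter>Source Files</Filter>
    </ClCompile>
    <ClCompile Include="src\computervision\calibration\HandCalibrator.cpp">
      <Filter>Source Files</Filter>
    </ClCompile>
    <ClInclude Include="src\computervision\HandDetectRegion.h">
      <Filter>Header Files</Filter>
    </ClInclude>
    <ClInclude Include="src\computervision\calibration\HandCalibrator.h">
      <Filter>Header Files</Filter>
    </ClInclude>
    <ClInclude Include="src\computervision\calibration\StaticSkinTreshold.h">
      <Filter>Header Files</Filter>
    </ClInclude>
  </ItemGroup>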