Hand Tracking 30 FPS on CPU in 5 Minutes | OpenCV Python | MediaPipe (2021)

In this lecture we will learn Hand Tracking using OpenCV Python in real time. We will first go through the fundamentals of Hand Pose Estimation, and you will learn how to install and run the bare-minimum computer vision code with MediaPipe Hand Pose in 5 minutes, on CPU. That's right, no GPU required!

⭐Download the Code at the AI Vision Store -
https://augmentedstartups.info/VisionStore

⭐OpenCV Kickstarter Campaign -
https://bit.ly/OpenCVKickStarter

⭐FREE AI-CV Course -
https://augmentedstartups.info/yolov4release

⭐Membership + Source Code - https://bit.ly/Join_AugmentedStartups

⭐AI-CV Nano Degree - https://bit.ly/AugmentedAICVPRO

If you've watched Iron Man or Minority Report, you'll have seen how they manipulate holograms and virtual objects just by using their hands. Hand recognition and object manipulation is not something new. Virtual reality headsets like the Oculus Quest 2 released a relatively new feature that allows you to use your hands instead of a controller. This is really cool, because it is so much more intuitive to use our hands rather than controllers to interact with virtual objects. The easiest way you could implement your own hand recognition is with Leap Motion, which has great performance, but your app will require an external sensor and is quite processor-intensive. But what if there were a way to do this with just a single camera?

So today, if you watch till the end, I'm going to show you how to implement Hand Pose Estimation at 30 frames per second on CPU using Python. But first, let's find out what it is, how it works, and where you would use it.

In basic language, hand pose estimation is the task of finding keypoint features of our hands in images and video. These keypoints, or landmarks, correspond to the joints in our hands (you know, the parts that move). They say that when you detect a thumbs up, it gets people to like and subscribe to this video. Haha, jokes aside.
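
If you are curious which joints those landmarks map to, the MediaPipe Python package exposes them as an enum, so you can list them directly. A quick sketch, assuming mediapipe is installed:

```python
import mediapipe as mp

# MediaPipe names each of the 21 hand landmarks (the wrist plus four
# joints per finger) in an enum; this prints the index-to-name mapping.
for landmark in mp.solutions.hands.HandLandmark:
    print(landmark.value, landmark.name)
```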

There are many frameworks out there that already allow you to do single-camera hand pose detection and tracking, such as:
- HandMap
- awesomehandpose
- hand3d
- amongst many others.

But for our implementation we are going to be using MediaPipe, which is Google's open-source, cross-platform framework for building cool computer vision apps. We'll be using MediaPipe Hands, which employs machine learning to infer 21 3D landmarks of a hand from just a single frame. What's really amazing is that you don't need any expensive equipment like scarce and emotionally unavailable Nvidia GPUs.
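
To show how little code this takes, here is a minimal sketch of a real-time hand tracker along the lines of what we build in the video. It assumes mediapipe and opencv-python are installed and a webcam is available at index 0; the confidence thresholds are just reasonable defaults, not the video's final settings.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # assumes a webcam at index 0

# Runs entirely on CPU; detection/tracking thresholds are tunable.
with mp_hands.Hands(max_num_hands=2,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand_landmarks in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(frame, hand_landmarks,
                                          mp_hands.HAND_CONNECTIONS)
        cv2.imshow('Hand Tracking', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
```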

In a nutshell, how it works is that the model first performs palm detection using a single-shot detector optimized for real-time mobile use. Palm detection was chosen over hand detection because estimating bounding boxes of rigid objects like palms and fists is significantly simpler than detecting hands with articulated fingers.

They also achieved an average precision of around 95%, which is excellent.
Next, the region of interest is passed to the hand landmark model, which performs keypoint localization of the 21 3D hand-knuckle coordinates inside the detected region. As they mention, this method is robust enough to handle both partially and fully visible hands.
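
To get a feel for what the landmark model actually returns, here is a hedged example that runs the detector on a single image and prints the 21 landmark coordinates. The filename hand.jpg is only a placeholder, and the x/y values come back normalized to [0, 1], so they are scaled by the image size to get pixels.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Static-image mode runs palm detection on every input instead of
# relying on tracking between frames.
with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
    image = cv2.imread('hand.jpg')  # placeholder filename
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        h, w, _ = image.shape
        for hand_landmarks in results.multi_hand_landmarks:
            # x and y are normalized to [0, 1]; z is depth relative to the wrist.
            for idx, lm in enumerate(hand_landmarks.landmark):
                print(idx, int(lm.x * w), int(lm.y * h), round(lm.z, 3))
```

Landmark 8, for instance, is the tip of the index finger, which is the point you would track if you wanted to drive a virtual cursor with your hand.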


⭐ JOIN our Membership to get access to Source Code : https://bit.ly/Join_AugmentedStartups

------------------------------------------------------------
Learn Advanced Tutorials
►https://www.Augmentedstartups.info/Teachable-AI-Bootcamp
Support us on Patreon
►https://www.AugmentedStartups.info/Patreon
Chat to us on Discord
►https://www.AugmentedStartups.info/discord
Interact with us on Facebook
►https://www.AugmentedStartups.info/Facebook
Check my latest work on Instagram
►https://www.AugmentedStartups.info/instagram
------------------------------------------------------------
#handpose #opencv #robotics