So last week I set out to implement an Optical Flow algorithm to speed up my application. After long hours of trying to make it run on iOS, I finally managed to do it. I created a class called OpticalFlowTracker which contains several functions that help me track key points previously detected by my OrbTracker class.
The idea behind all of this is that I use OrbTracker’s feature detector and descriptor matcher to detect, recognise and match different images. Once the right image has been detected, I extract just four points and pass them to the Optical Flow Tracker. I do this because optical flow is a much faster algorithm and requires less CPU power to track images in real time. This approach has sped up my app dramatically!
Once I was done with the implementation of the Optical Flow Tracker, I created a simple sound player with audio-reactive visuals. The current state of the app looks nearly as good as the addon provided by Vuforia’s QCAR that I mentioned in my previous posts.
Now that these two things work together, I can move on to another really important part of my program: multiple image recognition. I have to build a database of images with music related to them. It is going to be quite challenging, as it will involve using more computer vision algorithms from the OpenCV library. However, once the core of my app is done, I will be able to start focusing on aesthetics and user interaction design.
I don’t really have much to write about this week, as I’ve been working quite hard on this Optical Flow Tracker. Now that I’m done, I can move on to new challenges. I’m right on my planned schedule so far, and the next steps for the upcoming week are as follows:
* Implement multiple image recognition
* Sound player + visuals
* Start planning my final year project report