There are two kinds of people in the world: those who divide the world into groups of two, and those who don’t. And here, now, is my take on AI. There are two kinds of AIs: those that are trained on a large set of samples, and those that “learn” from just the minimal required set of samples. The former is known as Deep Learning or BIG DATA AI; the other kind, which learns with minimal data, I call “MIN DATA AI”.
In BIG DATA AI, learning is basically a process that examines a large set of samples and builds a formula with a large number of parameters, many of which look indistinguishable to the naked eye. It is a time-consuming trial-and-error process in which the parameters are continually adjusted until training samples with the same label consistently yield similar outputs. A classic example is a data set of 50,000 images of digits (0 to 9), from which the AI model learns to recognize each digit.
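To make the trial-and-error idea concrete, here is a minimal sketch (my own illustration, not a specific product or library): a toy classifier whose two parameters are repeatedly nudged over thousands of synthetic labeled samples until same-label inputs yield similar outputs. The data, learning rate, and epoch count are arbitrary choices for the example.

```python
# Illustrative sketch of BIG DATA-style learning: iterative parameter
# adjustment over many labeled samples (a one-feature logistic classifier).
import random
import math

random.seed(0)

# Synthetic "large" dataset: inputs near 0 are labeled 0, inputs near 1 labeled 1.
data = [(random.gauss(label, 0.3), label)
        for _ in range(5000) for label in (0, 1)]

w, b = 0.0, 0.0   # parameters, adjusted by trial and error
lr = 0.1          # learning rate

for epoch in range(20):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # current prediction
        # Nudge parameters so same-label samples drift toward the same output.
        w -= lr * (p - y) * x
        b -= lr * (p - y)

def predict(x):
    """Classify a previously unseen input with the learned parameters."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0

print(predict(0.05), predict(0.95))
```

Note how the intelligence here is entirely in the volume of samples: with only a handful of data points, the same loop would learn very little.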
The BIG DATA AI is all about “mathematically recognizing” incredibly subtle patterns within the mountains of data.
Using this “pattern recognition technique,” you can now provide an answer, a decision, or a prediction for input data you have never seen before. It is a great tool that produces behavior that appears intelligent, but its basic learning is unlike the human brain’s. These algorithms require mountains of data and heavy CPU/GPU power to train. They have no common sense, conceptual learning, creativity, planning, human-like intuition, imagination, cross-domain thinking, self-awareness, emotions, etc. BIG DATA AI is basically an optimizer, driven by a large volume of data, for a single, very specific vertical task. It struggles at the extreme edges of the data, where it had only limited samples. The quality of the AI model also depends on the quality of the data, and any bias is deeply ingrained and cannot be selectively removed.
Yet there are many use cases for which we need to model intelligent processes found in nature, particularly human ones. This is where MIN DATA AI comes into play.
In MIN DATA AI, the models “learn” from the minimal data needed to learn. A formula is still created, but from minimal data. A classic example is how some people say they never forget a face, even one they met in passing, or how you recognize a taste you have experienced only once before.
MIN DATA AI is closer to how we learn and solve problems. I don’t need to drink 10,000 OLD FASHIONEDs (the cocktail) to tell that the drink in my hand is an old fashioned (although, in my case, maybe I already have), or to recognize whether the old fashioned is from HaberDasher San Jose or NOT (and, yes, HaberDasher is a great place). With this AI, you will be able to make bets even when the data says something else. Your AI can be closer to humans in risk taking, thinking outside the box, forgiveness, learning new inhibitions, etc. Your learning can be super fast too, and it does not require a large number of GPUs or ASICs. A learning that s
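A minimal sketch of the one-taste idea (my own illustration, under stated assumptions): a nearest-neighbor recognizer that needs only a single remembered example per class. The drink names and the three-number “flavor profiles” (say, sweetness, bitterness, citrus) are hypothetical features invented for this example, not real measurements.

```python
# MIN DATA-style recognition: one stored example per class,
# matched by nearest-neighbor distance. No training loop, no GPUs.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# One tasting each: this is the ENTIRE "training set".
known_tastes = {
    "old_fashioned": (0.3, 0.8, 0.2),  # hypothetical flavor profile
    "margarita":     (0.5, 0.1, 0.9),
}

def recognize(taste):
    # Match a new experience to the single closest remembered one.
    return min(known_tastes, key=lambda name: distance(known_tastes[name], taste))

print(recognize((0.35, 0.75, 0.25)))
```

One example per class is obviously brittle compared to a model trained on millions of samples, but it captures the point: recognition after a single exposure, at essentially zero compute cost.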
In the next three years, AI will rely less on BIG DATA and more on MIN DATA. This new MIN DATA approach will enable a whole new set of use cases that previously seemed out of reach.
Here is how you can start your journey for MIN DATA AI:
- Stop looking for BIG DATA. Like any other addiction, the BIG DATA addiction will be difficult to break, but you have to do it.
- Focus more on algorithms.
- Innovate ways to create BIG DATA from MIN DATA.
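One common way to create BIG DATA from MIN DATA is data augmentation: jittering a single example into many plausible variants. This is a hedged sketch of that idea; the noise scale, sample count, and the example feature vector are arbitrary choices for illustration.

```python
# Sketch of "BIG DATA from MIN DATA" via simple data augmentation:
# expand one measured sample into many noisy synthetic variants.
import random

random.seed(42)

def augment(sample, n=1000, noise=0.05):
    """Return n variants of one feature vector, each with small Gaussian jitter."""
    return [
        tuple(x + random.gauss(0.0, noise) for x in sample)
        for _ in range(n)
    ]

one_example = (0.3, 0.8, 0.2)        # the single sample we actually have
synthetic_set = augment(one_example)  # a "large" training set derived from it
print(len(synthetic_set))
```

For images, the same idea shows up as rotations, crops, and color shifts; the point is that the algorithm, not more collection, supplies the volume.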
MIND is perhaps MIN D(ata)
A future in which AI makes devices, processes, and automation truly intelligent, rather than merely making high-confidence decisions based on BIG DATA, is not far off.
To stay up to date on new developments in this area, please visit my LinkedIn profile https://www.linkedin.com/in/artofai/ where I will be hosting events every month, including one coming up on the 30th of September called “Artificial OR Intelligent? The New Paradigm of Min AI”.