The real promise of artificial intelligence is to automate the interpretation of information.
How is it going to do that?
What is Artificial Intelligence?
Before we can know what artificial intelligence is, it is essential to ask what intelligence itself is.
It is a complex concept with no universally accepted definition, since it covers so many processes and attributes that a single, narrow definition is difficult.
However, a reasonable approach can be based on the ability to think, understand, reason, apply logic and, above all, solve problems.
With artificial intelligence, the same thing happens.
There is no single definition.
Describing Artificial Intelligence
Among the various ways of describing it, the one I like most is the one suggested in 1990 by Ray Kurzweil, the American inventor and, since 2012, director of engineering at Google:
Artificial intelligence is the art of creating machines that perform functions that require intelligence when performed by people.
The concept of artificial intelligence dates back to the birth of computing, the term being coined by John McCarthy in a research proposal written in 1956.
In that proposal, McCarthy suggested that “significant progress could be achieved if machines were to solve the problems that until now could only be solved by people… if a group of carefully selected scientists worked together for one summer.”
Colossal ingenuity, coupled with promises far beyond what the technology of the time could deliver, condemned the term to intellectual ostracism among researchers, who preferred to replace it with more discreet labels such as “expert systems” or “neural networks”.
The turning point came in 2012, when the ImageNet Challenge, a research contest promoted by Stanford University, brought artificial intelligence to the forefront.
ImageNet is nothing but a virtual database containing millions of manually tagged images.
The contest challenges participants to develop automatic image tagging and recognition techniques.
In 2010, the winning team correctly classified the images 72% of the time.
Two years later, a team led by Geoffrey Hinton of the University of Toronto achieved 85% correct answers, thanks to a new technique called deep learning.
In 2015, the human success threshold (95% on average) was exceeded for the first time, with the winning team achieving a success rate of 96%.
What is Deep Learning?
Deep learning is a branch of machine learning that uses mighty computing power to handle representations of data, from which it can distinguish and establish hierarchies and patterns.
In the not-too-distant future, its main implication will be the automation of information interpretation.
By converting unstructured knowledge into logical concepts through image recognition and text comprehension, artificial intelligence could, for instance, analyze the entire human body for possible issues, instead of a doctor examining an x-ray and concentrating only on the heart.
Deep learning is based on a highly simplified model of the nervous system's structure.
Its architecture consists of artificial neural networks that try to reproduce the problem-solving process of the human brain.
A neural network is made up of layers. Information enters through the input layer; a series of artificial neurons organized in “hidden” layers then processes it, applying initially random numerical values, or “weights”, and sending the result to the output layer.
A deep network with many hidden layers can distinguish in great detail the properties of the input data.
Training a network involves adjusting the internal weights of its neurons so that it responds in the desired way when a specific input is entered.
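The layered forward pass and weight adjustment described above can be sketched in a few lines of Python. This is a minimal illustration with NumPy, using one hidden layer to learn the classic XOR problem; the layer sizes, learning rate, and iteration count are arbitrary choices, not anyone's production recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, which a network with no hidden layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Initially random "weights": input -> hidden layer, hidden -> output layer.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: the input flows through the hidden layer to the output.
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2)

    # Backward pass: adjust the weights so the output moves toward the target.
    grad_out = (out - y) * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

print(out.round())  # predictions after training
```

A “deep” network simply stacks more hidden layers between input and output, which is what gives it its greater capacity for abstraction.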
In the early 1990s, the usefulness of artificial neural networks was limited to tasks as simple as recognizing handwritten numbers.
Two decades later, several groups of researchers discovered that graphics processing units (GPUs) were exceptionally well suited to executing deep learning algorithms, running them up to 100 times faster.
The same chips that used to recreate imaginary worlds are great for helping computers understand the real world through deep learning.
The deeper a network is, the higher its capacity for abstraction and the better its results.
Deep Medicine: How Artificial Intelligence can make Healthcare Human Again
Deep learning is proving to be very useful in solving a wide variety of problems.
Google uses it to refine its search algorithm's results, improve the interpretation of voice requests made to its personal assistant Google Now, improve its translator, and help its autonomous vehicles understand their environment better.
IBM’s Watson computer system, which is capable of answering natural language questions, managed to outperform the best contestants on the famous American television show Jeopardy!
Likewise, deep learning is being tested by pharmaceutical companies like Merck to develop new drugs.
Predicting and preventing problems, from medical conditions to traffic jams, may be two of the most significant advances that artificial intelligence will bring us very soon.
There are several deep learning methods.
Let’s see how each one works:
What is the difference between Supervised and Unsupervised Learning
Supervised Machine Learning
It is the most widely used technique. It consists of training a system on sets of classified examples.
One of the common areas of application is spam filtering, for which massive databases are built from examples of messages classified as spam or not spam.
A deep learning system can be trained on these examples, iteratively adjusting the weights within its neural network to improve its precision.
The main advantage of this method is that it does not require human intervention to create a list of rules, nor to program its implementation in code.
The system learns directly from the classified data.
Systems trained on tagged information are currently used to classify images, recognize voice commands, detect fraudulent credit card transactions, identify viruses and spam, and hyper-segment online advertising: applications in which the correct answer is known from a large number of previous cases.
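The spam example can be sketched with the simplest possible supervised learner. For illustration this uses a bag-of-words logistic regression trained by gradient descent rather than a deep network, and the four labelled messages are invented; the principle, learning directly from classified examples instead of hand-written rules, is the same:

```python
import numpy as np

# Tiny hand-labelled corpus (label 1 = spam). Purely illustrative data.
messages = [
    ("win free money now", 1),
    ("claim your free prize", 1),
    ("meeting rescheduled to monday", 0),
    ("lunch tomorrow with the team", 0),
]

# Build a vocabulary and bag-of-words vectors from the labelled examples.
vocab = sorted({w for text, _ in messages for w in text.split()})

def vectorize(text):
    words = text.split()
    return np.array([words.count(w) for w in vocab], dtype=float)

X = np.stack([vectorize(t) for t, _ in messages])
y = np.array([label for _, label in messages], dtype=float)

# The system adjusts its weights iteratively to fit the classified examples.
w = np.zeros(len(vocab))
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted spam probabilities
    w -= 0.1 * X.T @ (p - y)            # nudge weights toward the labels

def is_spam(text):
    return 1.0 / (1.0 + np.exp(-(vectorize(text) @ w))) > 0.5

print(is_spam("free money prize"))     # classified as spam
print(is_spam("team meeting monday"))  # classified as not spam
```

No rule such as “messages containing *free* are spam” was ever programmed; the weights learned it from the data, which is the main advantage the text describes.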
Facebook can recognize and tag your friends when you upload a photo and has just launched a system that describes the content of the images for blind users.
There are huge reserves of data that can be screened by supervised learning.
The adoption of this technology is allowing companies dedicated to computer security, marketing and financial services to reinvent themselves as artificial intelligence companies.
Unsupervised Machine Learning
It consists of training a network by exposing it to a large number of examples, but without telling it what to look for.
Rather, the network learns to recognize features and group them with similar examples, thereby detecting hidden groups, links, or patterns within the data.
Unsupervised learning is used to find the unknown: tracking traffic patterns for anomalies that may correspond to a cyber attack, analyzing large numbers of insurance claims to detect fraud, or spotting clusters of furry faces on YouTube that happen to be cats.
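A minimal sketch of the unsupervised idea is k-means clustering: given unlabelled points, it discovers the hidden groups on its own. The synthetic two-group data, the number of clusters, and the iteration count below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Unlabelled data: two hidden groups, but the algorithm is never told that.
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

# k-means: assign each point to its nearest centre, then move each centre
# to the mean of the points assigned to it, and repeat.
centres = np.array([X[0], X[-1]])  # start from two arbitrary data points
for _ in range(10):
    dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centres = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(centres.round(1))  # one centre near (0, 0), the other near (5, 5)
```

The algorithm never sees a label, yet it recovers the two groups: the same principle behind finding anomalous traffic patterns or clusters of cat faces.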
Reinforcement Learning
It is a hybrid between supervised learning and unsupervised learning.
It is based on behavioural psychology and consists of training a neural network to interact with its environment, occasionally giving it a reward as feedback.
Training consists of adjusting the weights of the network to find the strategy that generates the most rewards most consistently.
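The reward-driven adjustment can be illustrated with tabular Q-learning, the textbook reinforcement learning algorithm, on a toy corridor where only the last cell pays a reward. The environment and all the numbers here are invented for illustration; systems like DeepMind's combine this idea with deep neural networks:

```python
import random

random.seed(0)

# Toy environment: a corridor of 5 cells. The agent starts in cell 0 and
# gets a reward of 1 only when it reaches cell 4; every other step pays 0.
N_STATES = 5
ACTIONS = (-1, +1)  # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(200):  # training episodes
    s = 0
    while s != N_STATES - 1:
        # Occasionally explore at random; otherwise exploit current estimates.
        if random.random() < 0.3:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Adjust the state-action value toward the reward plus the best
        # value reachable from the next state (the "feedback" step).
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += 0.5 * (reward + 0.9 * best_next - q[(s, a)])
        s = s_next

# The learned strategy: in every cell, step right toward the reward.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Nobody tells the agent that “right” is correct; the occasional reward alone shapes the values until the most rewarding strategy emerges.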
DeepMind is the best example of the success of this approach.
In February 2015, the company published a study in Nature describing a reinforcement learning system capable of learning to play 49 classic Atari games using only the pixels on the screen and the score.
The machine learned each of them from scratch and, in 29 of them, achieved a level equivalent to or higher than a human's.
In March 2016, its AlphaGo program defeated Lee Sedol, the world's second-best Go player.
Demis Hassabis, co-founder and CEO of DeepMind, is currently developing a new method called transfer learning, which would allow a reinforcement learning system to leverage previously acquired knowledge, rather than having to be retrained from scratch in every situation.
It is, in short, what we people do by default and without effort.
A child can tell that a Formula 1 is a car without having seen one before. Computers still cannot.
MetaMind, a startup recently acquired by Salesforce, works on a similar approach called multi-task learning, in which the same neural network architecture is used to solve different problems, so that the knowledge gained on one task helps solve another, even a different one.
It also explores various types of modular architectures capable of ingesting sets of statements and answering questions about them by deducing the logical connections between them.
General AI (Artificial General Intelligence)
The goal of the most advanced research is to build a general AI: a system capable of solving a wide variety of problems.
The most optimistic hope to reach a human-like level within a decade.
The fact that humans can learn from small amounts of information suggests that intelligence can be developed without the need for huge data sets, as startups Numenta and Geometric Intelligence are demonstrating.
At first, the rapid advances we have discussed will appear as incremental improvements to the online services we use daily.
Search results will be more accurate, and recommendations better personalized.
Without realizing it, and sooner than we imagine, practically “everything” will come with artificial intelligence incorporated.
Interfaces will evolve beyond icons and windows, into conversational and predictive models, making them accessible to people who cannot read or write.
These continuous improvements can result in sudden changes at the moment the threshold is crossed at which machines become capable of carrying out tasks that previously could only be performed by humans.
Cases like self-driving cars or robo-advisors are just a couple of examples of how automation will affect both low-skilled and highly-skilled workers whose occupations are routine.
There is ample room for unexpected advances to appear.
Technology is light years ahead of regulations, and society is not going to look favourably on much of the progress that awaits us.