Machine Learning
This is a field of study that emerged in the 1990s and has its roots in pattern recognition, computational learning theory, and Artificial Intelligence (AI). It is the study of how computers can learn without being explicitly programmed to do so. It focuses on the formation and design of algorithms that can learn from and make predictions on collected data. Such algorithms work by gathering information from inputs and building a model on which to base decisions or predictions.
Machine learning concentrates on the design of computer programs and algorithms that teach themselves to grow and adapt when given new data. The process is not unlike data mining: both systems search through data, provided to or gathered by them, looking for patterns. However, in data mining applications the data is extracted for human comprehension, while machine learning algorithms use the patterns they find in the data to adjust the program's actions accordingly.
Machine learning is closely linked with computational statistics, which also makes predictions on the basis of gathered data. It is sometimes confused with data mining, but data mining is more focused on exploratory data analysis: it aims to discover previously unknown attributes in the data. Machine learning, by contrast, concentrates on prediction based on known attributes learned from training data. This allows data scientists to reproduce reliable, repeatable decisions and results, and to infer hidden patterns from statistical data and trends.
As a scientific effort, machine learning arose from the quest for artificial intelligence. Already in the early days of AI as an intellectual field, researchers were keen on having machines learn from data. They approached the problem with various symbolic methods, as well as with what were at the time called neural networks; these were mostly models that were afterwards found to be reinventions of the generalized linear models of probability and statistics. Probabilistic reasoning was also employed, mostly in computerized medical diagnosis.
An increasing focus on the logical, knowledge-based approach caused a rift between artificial intelligence and machine learning. Probabilistic systems were plagued by both theoretical and practical issues of data gathering and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favour. Work on symbolic, knowledge-based systems did continue, which led to inductive logic programming, but the statistical line of research was by now outside the field of AI proper, in pattern recognition and information retrieval. Neural networks research had been abandoned by AI and by computer science at nearly the same time. This line, too, was continued outside the artificial intelligence and computer science field, as "connectionism", by researchers from other disciplines.
Machine learning emerged as a separate field and started to expand in the 1990s. The field changed its aim from achieving artificial intelligence to tackling solvable problems of a more practical kind. It moved its focus away from the symbolic methodologies it had inherited from AI and toward methods and models taken from probability and statistics.
Use in Computer Science:
Applications of machine learning include spam filtering, optical character recognition, and search engines. Machine learning is used by data scientists and data analysts in order to determine which algorithm is best suited to produce a given result, based on the quantity, quality, and inherent nature of the data. The data is then used for predictive analysis in various ways, for example in recommendation systems: similar products on eBay, personalized content on Google+ pages, video recommendations on sites like YouTube and, last but not least, friend suggestions on Facebook. It is also used for predictive search in the Google and Bing search engines.
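To make the spam-filtering application concrete, here is a minimal sketch in pure Python: a word-frequency scorer that learns from labelled example messages. The training messages and the 0.5 threshold are invented for illustration; a real filter would use far more data and a proper probabilistic model.

```python
def train(messages):
    """Count how often each word appears in spam vs. ham messages."""
    counts = {"spam": {}, "ham": {}}
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] = counts[label].get(word, 0) + 1
    return counts

def spam_score(counts, text):
    """Fraction of words in `text` seen more often in spam than in ham."""
    words = text.lower().split()
    spammy = sum(1 for w in words
                 if counts["spam"].get(w, 0) > counts["ham"].get(w, 0))
    return spammy / len(words) if words else 0.0

# Toy labelled training data ("teacher"-provided examples).
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to friday", "ham"),
    ("see you at the meeting", "ham"),
]
model = train(training)
print(spam_score(model, "free prize money") > 0.5)   # True: flagged as spam
```

Note how the program's behaviour comes entirely from the training data: feeding it different examples would change which messages it flags, without any change to the code.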
A more detailed example is Facebook's use of machine learning. Facebook's News Feed uses this approach to customize each user's feed. If a user often stops scrolling to read or like a particular friend's posts, the News Feed will start to show more of that friend's activity earlier in the feed. Behind the scenes, an algorithm is using statistical and predictive analysis to identify patterns in the user's data and then using those patterns to populate the News Feed. Should the user no longer stop to read, like, or comment on that friend's posts, the newly collected data becomes part of the data set and the News Feed changes accordingly.
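A toy version of this kind of engagement-based ranking can be sketched in a few lines. The interaction counts and post contents below are invented; the point is only that friends the user has engaged with more get ranked earlier, and updating the counts changes the ordering.

```python
# Hypothetical per-friend engagement counts (likes/comments so far).
interactions = {"alice": 12, "bob": 3, "carol": 7}

posts = [
    {"friend": "bob",   "text": "lunch pics"},
    {"friend": "alice", "text": "vacation photos"},
    {"friend": "carol", "text": "new job!"},
]

# Rank posts by how much the user has engaged with each friend before.
ranked = sorted(posts, key=lambda p: interactions.get(p["friend"], 0),
                reverse=True)
print([p["friend"] for p in ranked])   # ['alice', 'carol', 'bob']
```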
Based on the type of feedback signal or data given to the learning system there are three broad categories of machine learning. These are:
- Supervised learning: The machine is given sample inputs and their desired outputs by an entity called a "teacher", and the aim is to learn a general rule that maps inputs to outputs. These algorithms apply whatever they have learned previously to new data.
- Unsupervised learning: No labels, tags, or explanations about the input are given to the learning algorithm; it is left on its own to find structure in the data. It is used to discover hidden patterns, and these algorithms draw their own inferences or conclusions from the given datasets.
- Reinforcement learning: A program interacts with a dynamic environment in which it must perform a certain task, such as driving a vehicle or playing a game against an opponent, without being explicitly told whether it is getting closer to its goal.
There is also semi-supervised machine learning, where the "teacher" gives the machine an incomplete signal: a training set with some of the target outputs missing.
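The supervised case above can be sketched with one of the simplest possible learners, a 1-nearest-neighbour classifier: the "teacher" signal is the label attached to each training input, and new inputs are given the label of the closest training example. The labelled points are made-up toy data.

```python
def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(train, key=lambda ex: dist2(ex[0], point))
    return closest[1]

# (input, desired output) pairs supplied by the "teacher".
train_data = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
              ((8.0, 9.0), "large"), ((9.0, 8.5), "large")]

print(nearest_neighbour(train_data, (1.1, 0.9)))   # small
print(nearest_neighbour(train_data, (8.5, 9.0)))   # large
```

The general rule mapping inputs to outputs here is implicit in the stored examples: the classifier never sees `(1.1, 0.9)` during training, yet still labels it sensibly.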
One interesting advantage of machine learning is feature learning: a system initialized at random and trained on some data will learn good feature representations for the task at hand. Such learning is useful for problems like face or speech recognition and image classification.
These days there is simply too much data for humans to process and analyze by themselves. We would be nowhere without automated systems to process and learn from all the data being produced.
Parameter optimization is similar to feature learning. Machine learning typically searches over a range of candidate values to optimize a large number of parameters, and it is not feasible for humans to find such an optimal setting manually; consider, for instance, recognizing a speaker from pitch, tone, and amplitude.
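The simplest form of this kind of search is a grid search: try many candidate values of a parameter and keep the one with the lowest error. The data points and the model y = a * x below are invented for illustration; real problems involve many parameters, which is exactly why the search must be automated.

```python
data = [(1, 2.1), (2, 3.9), (3, 6.2)]   # (x, y) pairs, roughly y = 2x

def error(a):
    """Sum of squared errors of the model y = a * x on the data."""
    return sum((y - a * x) ** 2 for x, y in data)

# Try a grid of candidate values for the slope `a` and keep the best.
candidates = [a / 10 for a in range(0, 41)]   # 0.0, 0.1, ..., 4.0
best = min(candidates, key=error)
print(best)   # 2.0
```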
There is no guarantee that machine learning will work in every case. Sometimes machine learning will fail, requiring understanding of the problem to be solved in order to apply the right algorithm.
There are also large data requirements: these learning algorithms need a lot of training data, and it can be very difficult to collect and work with such large amounts of data.
The growing quantity and variety of available data, cheaper and more powerful processing, and more affordable data storage mean that nowadays we can quickly and automatically produce models that analyze larger, more complex data and deliver faster, more accurate results on a large scale. Machine learning is therefore fast becoming a very important and widely implemented part of our daily lives.
Will machine learning keep working in the future?
Did this article help you gain knowledge about machine learning?
© 2016 AADESH KUMAR