Introduction

AI is in the news all the time, from autonomous vehicles to Face ID to ChatGPT taking the world by storm. But what is AI? What do the terms machine learning, deep learning, and natural language processing mean? How did AI start in the first place? Are there different types of AI? These are all topics this article will discuss to introduce you to this fascinating field of the future. Keep in mind that this article is a high-level overview and leaves out much complexity in order to gently introduce a complex field.

Definition

Artificial intelligence is when a machine performs cognitive functions that we normally associate only with humans. Three capabilities make AI what it is: generalized learning (adaptability), reasoning, and problem-solving skills. AI can function in different environments and adjust accordingly. It can logically reason through decisions and pick the best one in order to fulfill a given task. And finally, it can solve problems by finding solutions specific to each problem.

History of AI

Alan Turing and Initial Setbacks

AI first started with a British polymath named Alan Turing (often considered the father of computer science as well as the father of AI), who published a paper in 1950 called "Computing Machinery and Intelligence" in which he argued that none of the common objections to the idea of thinking machines hold up. However, these aspirations for human-like intelligence could go nowhere given the era's lack of computer storage. Before 1949, computers generally could not store commands, only execute them: they could be given a task, but they could not remember it. Computers were also very expensive; leasing one could cost $200,000 a month, which left only prestigious universities and large technology companies able to acquire one. So for a while, AI was stuck.

Dartmouth Conference

However, in 1956 the first AI program, called the Logic Theorist, was born. Created by Allen Newell, Cliff Shaw, and Herbert Simon, it sought to mimic the mathematical reasoning of humans. The Logic Theorist was showcased at a conference at Dartmouth College hosted by John McCarthy and Marvin Minsky, and it was John McCarthy who coined the term "artificial intelligence" at this conference. Although there were disagreements about what this new field would look like, the conference established artificial intelligence as a real possibility in the tech world and got the ball rolling for further advancements.

AI Advances

From the late 1950s to the mid-1970s, AI prospered thanks to advancements in computer storage, speed, and accessibility. Machine learning algorithms (which we'll discuss later) improved, as did the understanding of which algorithms to apply to which problems. More AI applications emerged, such as Allen Newell and Herbert Simon's General Problem Solver, which, as the name suggests, solved structured problems such as logical proofs and mathematical word problems. We also see Joseph Weizenbaum develop ELIZA, which simulated conversation using pattern matching and language interpretation; ELIZA made some early users believe they were actually talking to a person. These successes convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research. Optimism was so high at this point that Marvin Minsky (co-host of the Dartmouth conference discussed earlier) said, "from three to eight years we will have a machine with the general intelligence of an average human being." But there was still a long way to go before decent natural language processing and other AI capabilities would become reality. Despite the advancements and excitement around AI, there was simply not enough computational power to do anything substantial, and patience and funding faded away for a decade.

AI Reignited

The 1980s saw AI reignited through new algorithmic advancements and new streams of funding. John Hopfield and David Rumelhart gave rise to deep learning (more on this later), which gave computers the ability to learn from experience. Edward Feigenbaum introduced expert systems, which sought to mimic the decision-making process of a human expert in a given field; this let people who were not well-versed in a field ask the program for advice and have it answer their questions. The Japanese government heavily funded expert systems and other AI technologies through its Fifth Generation Computer Project (FGCP), investing $400 million from the early 1980s to the 1990s with the goal of improving computer processing and AI. Although most of its aspirations were not met, the project inspired a generation of engineers who became curious about this emerging field. Even as government funding dwindled, AI scored many achievements going into the 1990s and 2000s. In 1997, IBM's Deep Blue chess program beat the world chess champion, Garry Kasparov, and speech recognition software developed by Dragon Systems was implemented on Windows. All in all, AI was progressing at a steady pace.

Moore’s Law

The speed of AI’s development, historically and today, is not only about how we code AI but also about how much more advanced our computers get. Moore’s Law states that the number of transistors on a microchip doubles roughly every two years while the cost of computing is halved. This helps explain the ups and downs of AI: progress often stalls until computational power catches up.
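As a quick illustration of how fast that doubling compounds, here is a small sketch. The starting count, years, and chip are made-up illustrative numbers, not data about any real processor:

```python
# Illustrative only: project transistor counts under Moore's Law,
# assuming a hypothetical chip with 2,000 transistors in 1971.
def moores_law(start_count, start_year, end_year):
    """Double the transistor count once every two years."""
    doublings = (end_year - start_year) // 2
    return start_count * 2 ** doublings

# Ten doublings over 20 years: 2,000 transistors become about 2 million.
print(moores_law(2_000, 1971, 1991))  # → 2048000
```

Twenty more years of doubling would multiply that count by another factor of about a thousand, which is why hardware progress alone can unlock AI techniques that were previously impractical.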

Subfields of AI (with examples)

Machine Learning

Machine learning is centered around machines learning from large amounts of data. Machine learning systems can recognize patterns and make decisions with minimal human intervention. This is possible because programmers use mathematics such as statistics, linear algebra, and calculus to create machine learning algorithms. Machine learning engineers, data scientists, and programmers in general choose among many different algorithms depending on the type of data they are analyzing and what they want to predict from it. These algorithms can be classified as supervised, unsupervised, or reinforcement learning (more on this in a future article). Some examples of machine learning include self-driving cars, facial recognition, spam email filtering, image recognition, and product recommendations.
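To make "learning patterns from data" concrete, here is a minimal sketch of one supervised learning algorithm, a 1-nearest-neighbor classifier, in plain Python. The spam-filtering features and data points are invented purely for illustration:

```python
# A toy supervised learning algorithm: 1-nearest-neighbor.
# It predicts the label of whichever training example is closest
# to the new input, so the "learning" is simply remembering the data.
def nearest_neighbor(train, query):
    """Return the label of the training point closest to the query."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Each example: ((links_in_email, exclamation_marks), label) — made-up features.
train = [((8, 9), "spam"), ((7, 6), "spam"),
         ((1, 0), "not spam"), ((0, 2), "not spam")]
print(nearest_neighbor(train, (6, 8)))  # → spam
```

The algorithm was never told what "spam" means; it simply generalizes from labeled examples, which is the essence of supervised machine learning.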

Deep Learning/Neural Networks

Deep learning/neural networks are a subfield of AI inspired by neuroscience, the study of the nervous system and the brain. Drawing on cognitive science, the goal of a neural network is to mimic the human brain as closely as possible. It does this with interconnected nodes (neurons) arranged in layers, where each node acts as a mathematical function that transforms the data it receives and passes the result on to the next layer. To perform tasks, neural networks rely on various statistical techniques. Some examples of neural network applications include risk analysis, stock-exchange prediction, and fraud detection.
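As a sketch of that layered structure, the toy network below passes two inputs through a hidden layer and then an output layer, with each node computing a weighted sum of its inputs followed by a sigmoid activation. The weights and biases are arbitrary illustrative values, not a trained network:

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each node is a small math function
    (weighted sum of the inputs plus a bias, then an activation)."""
    return [sigmoid(sum(w * x for w, x in zip(node_w, inputs)) + b)
            for node_w, b in zip(weights, biases)]

# Two inputs -> hidden layer of two nodes -> single output node.
hidden = layer([0.5, -1.0], weights=[[0.8, 0.2], [-0.4, 0.9]], biases=[0.1, 0.0])
output = layer(hidden, weights=[[1.5, -2.0]], biases=[0.3])
print(output)  # a single value between 0 and 1
```

In a real network, training adjusts those weights using statistics and calculus (gradient descent) so the outputs match known examples; the forward pass shown here stays exactly the same.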

Natural Language Processing

Natural language processing is a subfield that deals with analyzing and interpreting human language. Programmers use it to extract specific information from a piece of text or to manipulate text. Some examples of natural language processing include chatbots (like ChatGPT), text translation, and speech recognition.
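To give a flavor of the pattern matching that early conversational programs like ELIZA relied on, here is a minimal, purely illustrative rule-based responder. The patterns and replies are made up, and modern chatbots such as ChatGPT work very differently (they are trained on vast amounts of text rather than hand-written rules):

```python
import re

# Hand-written (pattern, reply template) rules, in the spirit of ELIZA.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "What makes you feel {}?"),
]

def respond(text):
    """Echo part of the user's sentence back using the first matching rule."""
    for pattern, reply in RULES:
        match = pattern.search(text)
        if match:
            return reply.format(match.group(1))
    return "Tell me more."  # fallback when no pattern matches

print(respond("I am worried about my exams"))
# → Why do you say you are worried about my exams?
```

Even this tiny script shows why ELIZA felt convincing: reflecting the user's own words back creates an illusion of understanding without any real interpretation of meaning.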

Computer Vision

Computer vision is a subfield of AI that allows a computer to identify and analyze visual input such as images and video. This happens through pattern recognition powered by deep learning/neural networks. The goal of computer vision is to draw information from images and produce analysis based on them. Some examples of computer vision include analyzing MRI scans and X-rays and detecting medical anomalies.
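As a toy sketch of drawing information from an image, the snippet below scans a tiny grayscale "image" (a grid of brightness values from 0 to 255) for vertical edges by comparing neighboring pixels. Real computer vision systems learn their filters with neural networks rather than using a hand-picked threshold, but the idea of examining local pixel neighborhoods is similar:

```python
# Illustrative edge detection on a made-up 2x4 grayscale image.
def vertical_edges(image, threshold=100):
    """Mark positions where brightness jumps sharply to the right."""
    edges = []
    for row in image:
        edges.append([abs(row[x + 1] - row[x]) > threshold
                      for x in range(len(row) - 1)])
    return edges

# A dark region (0) next to a bright region (255): the edge sits between
# columns 1 and 2, so that position is flagged True in every row.
image = [[0, 0, 255, 255],
         [0, 0, 255, 255]]
print(vertical_edges(image))
```

Edges are a first step toward recognizing shapes and objects, which is why neighborhood operations like this (generalized into learned convolutional filters) sit at the heart of modern computer vision.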

Types of AI

Reactive Machines

Reactive machines are AI that have no memory; they simply perform a given task by reacting to the current input. A reactive machine cannot anticipate what comes next unless it is given the necessary information. A great example mentioned earlier is IBM's Deep Blue: it could analyze the current chess board to make a decision (react), but it could not remember its past mistakes to guide future decisions.

Limited Memory

Limited memory AI, unlike reactive machines, can look at the past and analyze it. In fact, the more data it has, the smarter it becomes, since it can train itself on that data. It uses the analysis of past and present data to decide its next action. However, you shouldn't think of limited memory AI as having a bank of memories it draws experiences from; rather, past and present data help train it to make better decisions in the future. This type of AI relies on deep learning/neural networks, and a good example today is self-driving cars.

Theory of Mind

Unlike the first two types of AI, the next two don't exist yet. If theory of mind AI becomes a reality, however, it could potentially understand the world along with human emotions, thoughts, intentions, and behavior.

Self-awareness

The final type of AI is self-awareness, where the machine is conscious of its own existence and has a sense of self. We are, however, a long way from this type of AI.

Conclusion

AI is an ever-growing field with much complexity. We see it changing industries and making science fiction seem real. We hope that after reading this article you have a basic understanding of where AI came from, the subfields within it, and the types of AI. Going into the future, we hope to see AI used for the common good and to help society prosper.

This was a very high-level overview of AI; there is much, much more complexity to dive into. We at AI Tech Spark will keep posting articles discussing AI-related topics in more depth, along with other tech-related topics. Thank you so much for reading, and please leave a comment with any questions you have!