
The Ultimate Quick-Start Guide to AI, Machine and Deep Learning

Dele Omotosho
15 min read

If you ignore the role Artificial Intelligence plays in today’s business landscape, you’re doomed.

Despite the general awareness of AI, many often forget that you can’t just hire a couple of data scientists to make inroads in the industry. 

Here’s the small secret: 

Data scientists are good at machine learning, statistics, and finding patterns in data. However, they may lack the decision-making skills of a business leader, and those skills are still required to create a production-grade system.

As a decision-maker, you need to be aware of the patterns found in the data and turn them into insights you can use in your business. This is the new normal.

In this article, I walk you through everything you need to know about the world of artificial intelligence, without any of the usual technical jargon.

Let’s dive in.

What is Artificial Intelligence (AI)?

Artificial intelligence is the science of building computer systems that can think and act like humans. It can also be described as the ability of a machine to perform cognitive functions such as speech recognition, planning, learning, interaction, and problem solving, among a variety of other tasks.

Some of the most popular examples of artificial intelligence are driverless automobiles that can interact with the environment just like a human driver. Less glamorous, but equally important, examples are the automated robotic tools that help build automobiles in factories.

Artificial Intelligence—A Very Brief History

You should seriously think about adopting artificial intelligence because the human race is at an important juncture of a technological breakthrough. To understand this transition, we need to look back to 1805, when Legendre laid the groundwork for machine learning with the method of least squares. However, its widespread use did not become possible until 1991, after the introduction of the World Wide Web. Since then, the Internet has produced an explosion of data, which helps explain the recent growth of artificial intelligence.

Today, we are at an important crossroads in the history of computing because the three powerful components of artificial intelligence—the explosion of data, algorithmic advancement, and an increase in computing power—have become a reality. In fact, the groundbreaking advancement in the field is attributed to the confluence of these three components.

While computing algorithms had already started to take shape as early as 1805, the exponential increase in computing power was first described in 1965, when Intel cofounder Gordon Moore observed that chip power would grow exponentially every few years. His observation, now known as Moore's Law, was later refined to predict that chip power would roughly double every 18 to 24 months.

The introduction of the Internet paved the way for a union of these three components. Many experts date their confluence to 2009, when Andrew Ng and his team at Stanford discovered that GPUs could process data much faster than the CPUs used in typical computers. According to their research, GPUs could be used to train deep-belief networks up to 70 times faster than a typical central processing unit, turning experiments that once took weeks into jobs that finished in days.

In 2017, Google introduced the TPU (tensor processing unit), which is reputed to be 10 to 30 times faster than a GPU and will be offered through Google's cloud-based services. Overall, it means the next four to five years are crucial for companies that have an opportunity to become early adopters of this technological breakthrough.

What is Machine Learning (ML)?

Any discussion of artificial intelligence starts with the basics of machine-learning models. These models are the foundation of AI, and you must understand these terms to make informed decisions.

Remember: Machine learning is a smaller subset of AI.

The theory behind machine learning involves training machines to respond to events based on available data and past interactions. Initially, machines are fed large amounts of data from which they learn patterns. Using those patterns, they predict outcomes and recommend actions. In fact, machine learning has evolved to the point where a machine can learn and improve from new data and from its interaction with the environment, based on its previous actions.

Machine-Learning Models

There are usually three machine-learning models used in the industry. The most basic is the descriptive model, which describes what has happened and how. Despite its simplicity, it is an important part of the machine-learning process because it provides insights that help humans improve their productivity. Descriptive models are deployed in many industries around the globe.

A more advanced version is the predictive model, which forecasts future events based on probability theory. This model is especially useful in data-driven industries that rely heavily on probability, such as casinos and lotteries. The most advanced of the three is the prescriptive model, which not only predicts how things will unfold but also recommends what to do and how to do it. Robots and machines often use prescriptive models so they can make independent decisions.
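The three model types above can be illustrated with a tiny, made-up sales series. The numbers and the 10% stocking rule here are purely hypothetical, chosen only to show how description, prediction, and prescription build on each other:

```python
# Toy illustration of the three machine-learning model types.
sales = [100, 110, 120, 130]           # four months of past sales (made-up data)

# Descriptive: summarize what has already happened.
average = sum(sales) / len(sales)

# Predictive: extrapolate the recent trend to forecast next month.
trend = sales[-1] - sales[-2]          # month-over-month change
forecast = sales[-1] + trend

# Prescriptive: recommend an action based on the forecast
# (a hypothetical rule: stock 10% above forecast to avoid running out).
recommended_stock = forecast * 1.1

print(average, forecast, recommended_stock)
```

A real prescriptive system would weigh many more variables, but the progression is the same: describe the past, predict the future, then recommend an action.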

Types of Machine Learning

Supervised Learning

This is the simplest model, based on labeled inputs and outputs. The system learns the relationship between input and output from an extensive dataset, then uses that relationship to predict the outcome for new inputs. Supervised learning is widely used in academia and research. Examples of the model include algorithms such as linear regression, naïve Bayes, random forest, and decision trees.
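A minimal sketch of the idea, using linear regression as the example algorithm: the system sees labeled (input, output) pairs, learns the relationship between them, and then predicts outputs for inputs it has never seen. The data here is invented for illustration:

```python
# Supervised learning in miniature: fit a line y = w*x + b to labeled
# (input, output) pairs with ordinary least squares, then predict.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope from covariance over variance; intercept from the means.
    w = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - w * mean_x
    return w, b

# Training data where each label happens to equal 2*input + 1:
# this is the "relationship" the model must discover.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
w, b = fit_line(xs, ys)
print(w, b)        # learned slope and intercept
print(w * 6 + b)   # prediction for an unseen input
```

Real projects use library implementations rather than hand-rolled math, but every supervised method follows this shape: learn from labeled examples, then generalize.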

Unsupervised Learning

Unlike supervised learning, this approach requires only inputs to work, with no labeled outputs. From the input data, the system discovers patterns on its own and produces an output automatically. It is most useful in environments where the user has to work with a jumbled cluster of data, or where the data does not obviously make sense. For instance, a researcher may have millions of records of customer interactions with different products in a superstore. Using unsupervised learning, the researcher can sort this random data into specific, easy-to-understand groups. K-means clustering and Gaussian mixture models are examples of unsupervised learning.
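The superstore example can be sketched with the simplest possible k-means: given unlabeled one-dimensional data points, the algorithm discovers the groups by itself. The data and starting centers are illustrative assumptions:

```python
# Unsupervised learning in miniature: 1-D k-means with k=2 clusters.
# No labels are ever given; the algorithm finds the groups itself.
def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        # Step 1: assign each point to its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[idx].append(p)
        # Step 2: move each center to the mean of its assigned points.
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

# Unlabeled data with two obvious clusters: around 1 and around 10.
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
centers = kmeans_1d(data, centers=[0.0, 5.0])
print(sorted(centers))
```

Real clustering works on many dimensions (purchase counts, visit frequency, basket size, and so on), but the assign-then-recenter loop is the same.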

Reinforcement Learning

This system learns from its environment by reinforcing its actions. It is used where the only goal is to improve on the existing result. Reinforcement learning is used when researchers don't have a reliable set of training data and are unable to define the ideal state. Instead, the algorithm performs a task with the sole objective of earning a reward for improved results. The cycle continues indefinitely, or until the user is satisfied. Business uses of the reinforcement model are clearly evident in option-trading portfolios, self-driving cars, auctions, and load balancing for electricity grids.
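The trial-and-reward cycle can be sketched with tabular Q-learning, one standard reinforcement-learning algorithm, on a toy five-state corridor. The environment, rewards, and learning constants are all invented for illustration; the agent is never told the ideal policy and discovers it purely from the reward signal:

```python
import random

# Reinforcement learning in miniature: Q-learning on a 5-state corridor.
# The agent gets a reward (+1) only on reaching the rightmost state.
random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left, step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: nudge Q toward reward + discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the greedy action in every non-goal state is "right" (index 1).
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(GOAL)]
print(policy)
```

Production systems (trading, grid balancing, driving) replace this table with neural networks and far richer reward signals, but the act-observe-update loop is identical.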

Diving Deeper: From Machine Learning to Deep Learning

Deep Learning is another subset of Machine Learning

In today’s business environment, you will definitely come across AI models featuring deep learning. In simple words, these are advanced machine-learning models that you must understand to oversee future projects.

Mostly used to perform complex functions, deep learning requires a massive amount of data. As explained earlier, the introduction of the World Wide Web and related technologies has allowed us to process data on an epic scale. As new data becomes available, deep learning will continue to improve.

The most significant difference between deep learning and machine learning is the large number of interconnected layers in deep-learning models. The model uses multiple neural networks to share and understand the given data. At each level of the system, multiple stand-alone networks independently process data before sharing it with other networks. It also means that deep learning often involves separate decision-making processes to understand and weigh the data before reaching a coherent decision.

Compared to traditional machine-learning methods, deep learning can dramatically reduce error rates. Research shows a 41% reduction in errors for image classification using deep learning. Likewise, errors in facial and video recognition have fallen by almost a quarter.

Types of Deep Learning

Convolutional Neural Networks

The system can be described as a multi-layered model that processes data step by step through many neural-network layers. At each stage, increasingly complex features are extracted from the data to produce a powerful output. This model is mostly used where only unstructured data, images, and visual cues are available for processing. Facial recognition is the most common example.

For facial recognition, the system processes a series of pixels. At each step, it identifies and verifies unique features of different clusters of pixels to create an output. Comparing the output with the original data, it identifies the image and can reinforce its learning from the previous output. Business use cases of this model include health diagnostics from medical scans, detection of defective products, and understanding customers’ brand perception from images.
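The core operation behind "clusters of pixels" is convolution: sliding a small filter over the image and recording how strongly each patch matches. A minimal sketch, using a toy 4x4 image and a hand-picked vertical-edge filter (both invented for illustration; real CNNs learn their filters during training):

```python
# The convolution step at the heart of a CNN: slide a small kernel
# over an image and sum the element-wise products at each position.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A 4x4 "image": dark left half (0), bright right half (1).
image = [[0, 0, 1, 1]] * 4
# A vertical-edge filter: responds where brightness jumps left-to-right.
kernel = [[-1, 1],
          [-1, 1]]
feature_map = convolve2d(image, kernel)
print(feature_map)  # strongest response down the middle, where the edge is
```

A full CNN stacks many such filtered "feature maps", passing each layer's output through nonlinearities and pooling so later layers detect ever more complex features (edges, then shapes, then faces).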

Recurrent Neural Networks

When researchers are working with time-series or sequential data, the recurrent model may offer the best solution. It is composed of a multi-layered network that uses context nodes in different locations. Each node can store complex information, enabling it to react with other data clusters when required. It is increasingly used in business scenarios involving language translation, credit-card verification, and powering chatbots.
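The "context node" idea can be sketched with a single recurrent step: the same cell is applied at every position in the sequence, carrying a hidden state forward so that earlier inputs influence later outputs. The weights here are fixed toy values rather than trained parameters:

```python
import math

# A recurrent cell in miniature: each new hidden state mixes the current
# input with the previous hidden state, so context flows along the sequence.
def rnn(sequence, w_in=0.5, w_rec=0.8):
    h = 0.0          # hidden state, the network's "memory"
    history = []
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)
        history.append(h)
    return history

# A single pulse at the start keeps echoing through later states,
# fading gradually rather than disappearing at once.
states = rnn([1.0, 0.0, 0.0])
print(states)
```

That fading memory is exactly why recurrent networks suit translation and chatbots: the meaning of the current word depends on the words that came before it.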

Applications of Modern AI Techniques in the Industry

If it’s your first time implementing the technology, focusing on three main areas of AI will help you reach your goals quickly. Here is an explanation of these three critical areas:

Core Business

A recent Infosys report indicated that 73% of C-level executives confirm they have implemented some form of artificial intelligence in their core business. For instance, Morgan Stanley has created an intelligent tool for its brokers that can determine an optimal client portfolio allocation using the client’s existing allocation and tax preferences. Using up-to-date data, the company’s brokers can send customized advice to their clients without having to drill into the data themselves.

In fact, Pfizer is taking artificial intelligence to an entirely new level using IBM’s Watson for Drug Discovery tool. It is the first pharmaceutical company to use the tool in its effort to revolutionize cancer treatment through machine learning. In the fight against cancer cells, IBM’s tool will support research into immunotherapy, a treatment that harnesses the patient’s immune system to recognize and kill dangerous cells. IBM’s tool will allow researchers from Pfizer to analyze massive amounts of data, generating meaningful insights and safety assessments.

Human Resource Recruitment & Training

Recent research suggests that nearly half of large companies in the United States have implemented machine learning to enhance their human-resource efforts. Many companies use AI to pre-screen potential job applicants or evaluate them during the interview process, and increasing numbers of human-resource managers use artificial intelligence to train employees. Moreover, 62% of talent-acquisition companies plan to invest in AI software in the near future.

Companies such as IBM use chatbots to engage candidates whom they consider a good fit for the company. The chatbot has been a success, helping reduce the time required to evaluate candidates. When BuzzFeed used IBM’s Watson Candidate Assistant on its website, it saw 64% more candidates progress through the interview selection funnel. The software is also used by smaller tech companies such as Kore.ai, where in-house chatbots interact with employees to answer common questions and conduct interviews.

Critical Decision Making

Artificial intelligence can support managers in critical decision-making by offering data-driven, streamlined solutions to complex problems. A recent PwC report found that 62% of C-level executives will use artificial intelligence to improve productivity.

Already, companies are using artificial intelligence to test different hypotheses. For instance, Coca-Cola used AI technology to successfully launch Cherry Sprite. Instead of asking consumers about their tastes and preferences, Coke used its self-serve soda fountains to collect data from consumers. In this case, artificial intelligence improved the results, making Cherry Sprite a successful product launch.

While AI systems are critical to the success of managers, they should not be seen as a replacement for your human emotional intelligence and adaptability.

Implementation of Artificial Intelligence

Despite these technological achievements, you must understand that it will still be up to a human to make critical decisions. AI systems exist only to assist in the decision-making process, because even the most advanced systems need the kind of input that only a human brain can provide. Accordingly, here are the things to consider before launching your project.

Should You Invest in AI?

Implementation of artificial intelligence should align with your business objectives and plan. Accordingly, there should be proper planning before rolling out AI features to different business sectors. If multiple departments are involved, careful analysis is required to ensure a harmonious transition at every level. Until now, mostly researchers, data scientists, and engineers have been involved in the process; however, smart executives will ensure that everyone in the company can learn and take advantage of the system.

Artificial Intelligence and Business Goals

You don’t want to update your existing systems just because AI is a trend. Before implementing an AI system in your company, make sure that artificial intelligence will actually solve the problem. If it cannot increase your productivity, the transition from traditional systems can quickly become very costly. Accordingly, it makes sense to evaluate your goals and the impact of AI before integrating the new technology.

Once you are ready to implement, always test on small-scale projects before going mainstream. Despite their reputations, successful companies such as Google, Microsoft, and Intel are known to test new technology on a much smaller scale before upgrading to the next level. Even if your intuition tells you that AI will solve your problems, it helps to re-examine the urge to integrate it. You need to be cautious, because any large-scale modification to your systems can disrupt operations and often proves irreversible.

The First Steps

Many simple tasks are time-consuming only because of a traditional setup, and these repetitive tasks can be handled by AI. As machine learning has matured, AI solutions have increasingly become part of back-office functions, including accounting, IT support, logistics, and operations.

Before overseeing such tasks, you also need to understand the basics of artificial intelligence. It is not uncommon for executives to take training in natural language processing and machine-learning technology. Learning these topics will help you make decisions once you hire trained professionals working on seemingly complex tasks such as data mining, algorithm design, and trend detection. Without basic knowledge of AI, you will be hard pressed to make your own decisions.

As discussed, most AI projects start with a small task; therefore, learning AI will also help executives take charge of these small setups. By overseeing and managing small modifications, it becomes easier to grasp the full potential of the system and to come up with grander ideas. It will also be easier for employees to keep their focus on the project without losing sight of core operations.

Understanding Artificial Intelligence and Business Landscape

If you have made up your mind to take charge of the proceedings, don’t forget that successful implementation of AI models also requires three fundamental capabilities: data unification, real-time insight, and business context.

Data Unification

The soul of artificial intelligence is data, which needs to be unified in an understandable format. Before the widespread use of artificial intelligence, collecting such data was a laborious task, and companies often spent hundreds of man-hours gathering it. As technology progressed, numerous data service providers emerged to offer streamlined solutions.

For e-commerce businesses, clickstream data is an example of data unification. It allows decision makers to analyze the click paths of customers browsing the website. Similarly, numerous customer-journey analytics platforms can offer important data free of cost.

Real-Time Insight

Raw data cannot help a business increase its productivity unless you can extract real-time insight into consumer behavior. As a first step towards artificial intelligence, you need to understand the types of data used in AI systems.

After deciding which data to use, you must decide how you are going to decipher the information. These days, plenty of data-analytics software uses artificial intelligence to gather data and respond to a customer at every touch point of the buying journey.

For instance, Capillary offers a retail analytics solution to better understand consumer behavior. Businesses can integrate Capillary’s system to capture in-store data and gather information from hundreds of different touch points. The end result is a single view of consumer behavior as customers pass through and interact with those touch points.

Real-time insight helps you understand the psychology of consumers, enabling you to engage customers at the right moment of their shopping experience.

Business Context

Gathering the data and engaging customers at various touch points is merely the start of a potentially positive relationship; a business can only succeed if artificial intelligence can guide the consumer in the right direction without human intervention. The final part of a successful AI implementation is business context, which can be defined as a cross-platform journey that produces mutually beneficial results for consumers and the business.

It also means that humans need only work in the background to improve customer interaction while the computers dictate the best path. From a business-context perspective, the success of such a system is determined by evaluating productivity and other success parameters.

Buying vs. Building—Machine Learning Applied

When deciding whether to buy or build an AI system, you should base your decision on the potential success of the project. In addition, retain ownership of the system instead of ceding full control to the vendor.

Based on these preferences, you generally have four options.

Commodities

Most companies prefer this model because it allows them to share their data with vendors without revealing anything that might hand the vendor a competitive advantage. In this model, the company retains almost full control over its database.

Goldman Sachs uses the commodity model to screen potential candidates with artificial intelligence. Its vendor, HireVue, helps Goldman Sachs select preferred candidates based on word choice, facial expressions, and other characteristics. HireVue has over 20 million video profiles, a dataset that no individual company could replicate.

In this partnership model, Goldman Sachs uses AI solutions without fear of losing its competitive advantage to HireVue, because the two companies have different objectives.

Hidden Opportunities

Using a vendor’s database, companies can access data of critical importance. In this model, companies not only analyze chunks of the data but can also sell it to others. For instance, Woodside Energy gained access to IBM Watson, using it to organize 30 years of expert knowledge about oil-platform operations.

Relying on Watson’s natural language technology, Woodside Energy offered data access to all of its employees, who could then query and interact with the data to answer important operational questions. In this scenario, Woodside Energy maintained control over the data without sharing the information with IBM. The end result was a massive dataset valuable to other industry players, giving Woodside Energy the opportunity to sell the data to others.

Danger Zone

Recent research at Oxford University claims that half of the jobs in the United States could be lost to artificial intelligence in the coming decade. This should be an eye-opener for you: the transformation of the workforce has already begun. To survive, businesses must integrate AI into their workplaces as tedious, automatable tasks are actively replaced by technology.

You must understand the dynamics of AI to ensure that you hire the most productive workforce, one educated and trained to solve cognitively complex tasks. Understandably, some cognitively complex tasks cannot be relegated to machines; therefore, executives will need a workforce able to cope with the advanced technology. In fact, the workforce of tomorrow will be trained in STEM: science, technology, engineering, and mathematics. This trained workforce will be your most important asset when collaborating with vendors, particularly if the collaboration falls in the danger zone.

Machine diagnosis of radiological images is an example of this relationship, where the vendor can build a high-quality image library superior to that of any single healthcare provider. In such cases, where companies have no choice but to use third-party technology, they should be careful about how much they let the vendor learn about their operations. A knowledgeable workforce trained in AI will help keep the vendor relationship balanced. Without investing in AI technology and a STEM workforce, you are likely to lose quickly to more intelligent counterparts.

Gold Mines

If a company has access to a large database of critical information, it can reliably build its own AI system without sharing control with a vendor. A tire manufacturer successfully used this model to build an AI system that could tell tire vendors the potential demand for a tire in their respective stores. The system predicted tire demand based on anticipated tire wear, drawing on nearly 1.6 billion data points. As a result, sales and productivity increased substantially.

Before following this model, you should ensure that you have a team of programmers and data scientists who understand the nature of the project. In the long run, these employees will be critically important to the company because of their knowledge and expertise in building such a system. Sometimes it is also better to build a project with other like-minded companies. For instance, companies in the self-driving-vehicle market can collaborate to mine extensive data instead of relying on vendors. This type of collaboration often accelerates development.

If you’re still wondering what machine learning is and how it can help you, think fast: it is estimated that companies will dedicate up to 20% of their workforce to AI by 2020. According to The Economist, 75% of executives expect AI to be actively implemented in their companies within the next three years.

Make sure you are not on the losing end. Artificial intelligence is not the future; it’s now.

Now it’s up to you. Let me know in the comments what your current AI, ML, and DL challenges are.

Dele Omotosho

I help software businesses in emerging markets breakthrough sustainably & profitably