We are amid a fundamental transformation. Applications based on artificial intelligence technologies are rapidly evolving, with opportunities to enhance nearly every commercial process and unlock significant value for the organizations able to integrate them successfully into their businesses. It is a market projected to be worth more than US$2 trillion by the end of the decade, and it will cause massive disruption across labour markets, eliminating millions of jobs but also creating millions of new ones.
Every business needs an artificial intelligence (AI) strategy. This strategy will vary in size, scope, and complexity based on the needs of the specific organization, of course, but it is critical to understand the capabilities, benefits, and potential risks of AI-based applications – especially as your clients, partners, and employees are already using this technology to some extent.
What is ‘Artificial Intelligence’?
There are several definitions of Artificial Intelligence (AI), but the common thread is an attempt to create software applications that can approximate (and potentially surpass) human abilities in creative reasoning and problem-solving. The critical difference between AI and other computer programs is the ability to use past ‘experience’ gained through training to effectively solve problems the software has not been pre-programmed for – unlike traditional software, where a human must provide clear instructions for every eventuality the program may face, or an error will be returned.
This ‘general AI’ that can fully mimic humans’ wide-ranging logic and reasoning capabilities in a single program currently does not exist. When the media and companies talk about AI today, they are referring to applications built to fill a specific need, such as a chatbot built to engage with and answer questions from humans, image analysis designed to identify medical issues from x-ray or MRI scans, or programs designed to estimate the probability of a maintenance failure based on multiple sensor inputs. That said, as software such as ChatGPT has shown us, AI has evolved to mimic the capabilities of humans to a startling degree.
It is also helpful to recognize that the term AI encompasses various sub-fields, including machine learning, natural language processing, computer vision, etc. For the sake of simplicity, I will refer to all as “AI” but differentiate between the flavours of AI where appropriate to understand how it works and why it is the right tool for the job.
A Brief History
A very brief history of AI is helpful to understand why it is so capable today, which applications it is currently best suited for, and where it still struggles.
1950s: The start of the modern AI movement. Alan Turing publicly wonders whether machines can think and creates his famous “Turing Test” to help determine if a computer has achieved a ‘human’ level of intelligence. The term ‘artificial intelligence’ is coined, the first AI conference occurs, and the first “AI” software programs are developed.
1960s: The first ‘neural networks’ are developed – an attempt to build software programs that mimic the architecture of the human brain, and a foundational design for future AI applications.
1980s: Advancements in computer processing technologies enable progress in AI. New techniques that allow neural networks to ‘learn’ through backpropagation are developed, further mimicking how the human brain learns.
1990s: AI development continues to be closely correlated with rapid increases in available processing power. IBM’s “Deep Blue” system beats world chess champion Garry Kasparov.
2000 – 2010s: Explosion in AI development and interest, driven by access to large datasets and processing power – especially with the creation of commercially accessible ‘cloud’ storage and processing resources such as Amazon Web Services, Microsoft Azure, and Google Cloud, which dramatically lower the cost of and barriers to entry for training and deploying AI applications. Algorithms beat human champions at Jeopardy! (IBM’s Watson) and, impressively, at the complex game of Go (DeepMind’s AlphaGo). Image recognition technologies, such as convolutional neural networks, undergo rapid improvement, enabling self-driving cars and other vision-focused applications.
2020s: Development of new techniques, including ‘transformers’, allows for large language model (LLM)-based applications such as OpenAI’s ChatGPT and Google’s Bard, and an explosion in the use of ‘generative AI’. These applications not only effectively mimic human conversational styles, but can also use their massive training databases to synthesize new content. Image-based generative AI applications also proliferate rapidly, synthesizing new images from text prompts based on extensive databases of previous imagery.
There are a few key takeaways from the history above.
1. Development of AI has been closely associated with advancements in computer processing, storage, and accessibility. AI algorithms require massive amounts of data and processing power to ‘learn’, and their development is closely tied to the cost of both acquiring and analyzing the required datasets. Continued development of advanced storage and processing capabilities will unlock future AI advancements.
2. AI success to date has been a mix of attempting to mimic the known architecture of the human brain (e.g., foundation of neural networks with backpropagation for most AI applications), and by playing to the strengths of machines in providing structured and consistent data sets for ‘learning’.
3. In some specific applications, AI algorithms now outperform expert humans.
4. We are at an ‘inflection point’ where AI applications are moving from very specific (e.g., a program designed specifically to identify fraud in banking transactions) to more generalized (e.g., ChatGPT, a program that can handle a wide range of queries, from “build me an e-commerce site, including the required code” to “create a short story about a puppy named Spot”).
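To make the ‘neural networks with backpropagation’ idea from the takeaways above concrete, here is a minimal, illustrative Python sketch: a tiny two-layer network learns the XOR function by repeatedly nudging its weights against the error gradient. The layer sizes, learning rate, and epoch count are arbitrary choices for illustration, not a production design.

```python
import math
import random

random.seed(0)  # reproducible illustration

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: a classic task a single neuron cannot learn, but a small network can.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# Two hidden neurons (each: 2 weights + bias) and one output neuron.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    o = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, o

def train_epoch(lr=0.5):
    """One pass over the data; returns total squared error seen this pass."""
    total_error = 0.0
    for x, target in data:
        h, o = forward(x)
        total_error += (o - target) ** 2
        # Backpropagation: push the error gradient back through each layer.
        d_out = (o - target) * o * (1 - o)
        d_hidden = [d_out * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_out[i] -= lr * d_out * h[i]
        w_out[2] -= lr * d_out
        for i in range(2):
            for j in range(2):
                w_hidden[i][j] -= lr * d_hidden[i] * x[j]
            w_hidden[i][2] -= lr * d_hidden[i]
    return total_error

loss_start = train_epoch()
for _ in range(5000):
    loss_end = train_epoch()  # the error shrinks as the weights 'learn'
```

Nothing here is specific to XOR: the same gradient-following loop, scaled up enormously, is what lets modern networks ‘learn’ from their training data.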
AI is a toolset, not a specific application, and as such it has potential in almost every business process. It has evolved from long-standing mathematical methodologies such as statistics and probability theory, supercharged by access to today’s massive computing resources. Understanding where AI has evolved from helps clarify where it has the most potential as a tool to unlock value for your business.
How is AI Transforming Business Today?
While AI applications exist for almost any business function imaginable, below is a small sample of how AI is transforming business today.
Using AI/machine learning, companies can analyze historical datasets to create models that optimize inputs across an impressive range of industrial processes. Especially when combined with modern Internet of Things (IoT) sensors, AI can optimize complex industrial processes like never before, unlocking significant value for companies.
As with the industrial optimization above, organizations can build probability models that predict the risk of maintenance failure, allowing for finely tuned maintenance programs that avoid costly downtime.
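A toy sketch of such a probability model, in Python: a logistic model that converts sensor readings into a failure risk between 0 and 1. The sensor names and coefficients here are hypothetical, hand-set stand-ins for values that would normally be learned from historical failure records.

```python
import math

# Hypothetical sensor features and hand-set coefficients; in practice these
# weights would be learned from historical failure data, not chosen by hand.
WEIGHTS = {"temperature_c": 0.08, "vibration_mm_s": 0.6, "hours_since_service": 0.002}
BIAS = -9.0

def failure_probability(reading):
    """Logistic model: maps a weighted sensor score to a 0-1 failure risk."""
    score = BIAS + sum(WEIGHTS[k] * reading[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

# Invented example readings for two machines.
healthy = {"temperature_c": 60, "vibration_mm_s": 1.0, "hours_since_service": 200}
worn = {"temperature_c": 85, "vibration_mm_s": 6.0, "hours_since_service": 2500}
```

A maintenance program can then be triggered whenever the predicted risk crosses a threshold, rather than on a fixed calendar schedule.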
Using AI architectures such as Convolutional Neural Networks, applications can analyze images to learn and recognize objects and patterns. This has a wide range of applications, including scanning medical images to identify patient issues, providing core logic for self-driving cars, recommending fertilizer and irrigation plans in agriculture, performing quality control in manufacturing, and more.
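The core operation inside a convolutional neural network is the convolution itself: sliding a small filter over an image to highlight patterns such as edges. A minimal pure-Python sketch, with an illustrative edge-detecting kernel (real networks learn their kernel values during training):

```python
def convolve2d(image, kernel):
    """Slide the kernel over the image, summing element-wise products."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A 4x4 'image': dark on the left, bright on the right.
image = [[0, 0, 1, 1] for _ in range(4)]
edge_kernel = [[-1, 1]]  # responds where brightness jumps left-to-right
edges = convolve2d(image, edge_kernel)  # each row comes out as [0, 1, 0]
```

The output lights up exactly where the brightness changes – a tiny version of how stacked, learned filters let a network recognize tumours, lane markings, or defective parts.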
AI applications can analyze massive and disparate datasets to identify subtle patterns and indicators of fraud. This supports not only the prevention of credit card and other financial fraud but also the identification of plagiarism in education and commercial settings, fake reviews, and other areas where fraud erodes trust and imposes financial and non-financial costs on companies and their clients.
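One of the simplest pattern-based fraud signals is an anomaly score: how far a new transaction sits from a customer’s historical behaviour. A minimal sketch using a z-score – real systems combine many such features with learned models, and the transaction history below is invented for illustration:

```python
from statistics import mean, stdev

def anomaly_score(history, amount):
    """How many standard deviations a new amount sits from past behaviour."""
    m, s = mean(history), stdev(history)
    return abs(amount - m) / s

# Invented transaction history for one customer (dollar amounts).
history = [20, 25, 22, 19, 24, 21, 23]

typical_score = anomaly_score(history, 23)   # close to past spending: low score
suspicious_score = anomaly_score(history, 500)  # far outside the pattern: high score
```

Transactions scoring above a chosen threshold can be flagged for review – the statistical ancestor of the subtle, multi-signal models deployed today.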
AI’s ability to analyze massive and changing datasets allows it to generate effective pricing recommendations as conditions change. This enables dynamic pricing for ride-share apps, parking, and fuel/energy stations, where changing conditions demand a more responsive pricing strategy.
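A highly simplified sketch of demand-responsive pricing: the price scales with a demand/supply ratio, clamped to a band. The rule and clamp values here are made up for illustration; production systems learn these relationships from live data.

```python
def dynamic_price(base_price, demand, supply, min_mult=1.0, max_mult=3.0):
    """Scale price with the demand/supply ratio, clamped to a safe band."""
    ratio = demand / max(supply, 1)  # guard against zero supply
    multiplier = min(max(ratio, min_mult), max_mult)
    return round(base_price * multiplier, 2)
```

With a $10 base fare, balanced demand and supply leave the price unchanged, while a surge in demand pushes it toward the capped maximum – the same shape of behaviour riders see in ride-share surge pricing.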
Using natural language processing algorithms, AI-driven chatbots have become ubiquitous, able to answer many client and employee questions without requiring a human agent. With the development of large language models and generative AI applications such as ChatGPT, chatbots are becoming even more capable and harder to distinguish from humans providing the same service.
Similar to the deployment of chatbots, Siri (Apple), Cortana (Microsoft), and Alexa (Amazon) are all examples of AI-driven applications that allow us to interact with technology using natural language prompts and conversations. It is an evolution towards a more ‘Star Trek’ future, where we interact with our technology through verbal communication as much as physical inputs.
The applications above are only a tiny sample of how AI is leveraged in business today. It is a rapidly evolving field, and exciting new AI applications are being developed and released constantly – while this section may already need to be updated by the time you read it, some recent developments in the field include:
Generative AI (GenAI) leverages relatively new AI architectures that allow algorithms to synthesize new content by analyzing massive amounts of training data. A key difference is that while past AI applications had to be trained, tuned, and focused on a relatively narrow task to succeed, the new GenAI applications are impressively flexible in their use.