Artificial Intelligence (AI) is the practice of teaching a machine to simulate human thought, reasoning, and emotion, wherein the machine demonstrates intelligence through agents or bots that mimic human communication, learning, and problem-solving skills. Just as with classical disciplines, AI is broken into many endeavors, such as reasoning, analysis, machine learning, and knowledge representation. Programmers have to teach a machine (machine learning) to think (reason), then demonstrate that it understands (analyze) the concept (knowledge representation) and can work to a solution (problem solving) on its own. Independent Problem Solving is one of the key goals of AI.

A second, and increasingly important, goal of AI is Knowledge Generation: using speech recognition, perception, and Natural Language Generation (NLG) to create better outcomes faster and more efficiently. This can be seen in applications for Technical Support, Agriculture, and medicine, such as surfacing treatments that have worked in the past for other patients with similar disease profiles. Obviously, Natural Language Processing (NLP) is another key sub-discipline of AI. The AI Revolution in computing will make your business smarter in many ways, as data gathering advances alongside the increased scale in processing power driven by cloud and distributed computing initiatives.
But you cannot teach a machine to think and act like a human without first
understanding what human intelligence is. And this means that Computer
Scientists, whether they like it or not, are going to have to collaborate
with the Humanities. The Social and Anthropological disciplines
offer the best insights into what makes the human mind function, along with
Linguistics and Philosophy. The whole debate also engenders questions of
ethics: should we create artificial entities endowed with human properties like
intelligence, emotions, and choice? Clearly, automating a DevOps function with
AI is not going to give birth to Skynet, but laying the ethical groundwork is
still a relevant topic that will be addressed in future posts.
Intelligence, Artificial and Otherwise
There is no end to the debates, philosophical and otherwise, as to what
constitutes intelligence. For the purposes of our discussion, we will rely on a
simple definition from Philosophynow.org: “It is a
capacity to acquire, adapt, modify, extend and use information in order to
solve problems.” Therefore, intelligence is the ability to cope with the
unpredictable; in other words, to take the unpredictable and make it known and
predictable.
This concept encapsulates one of the disciplines of AI, Predictive Analytics,
in which the machine analyzes data in order to surface trends and thereby
enable predictions (sketched in code below). At the base of every debate is
the assumption that machines are communicating either with other machines
(M2M) or with humans (M2H). In this discussion, we shall first look at how
humans acquire language and communicate to exchange knowledge, and then at how
computer languages are modeled on human languages and therefore essentially
work on the same structural principles.
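Returning to Predictive Analytics for a moment, here is a minimal sketch of
the idea: fit a trend to past observations and extrapolate it forward. It uses
plain Python with no external libraries, and the sales figures are invented
purely for illustration.

    # Minimal predictive-analytics sketch: fit a least-squares line to past
    # observations and extrapolate the trend. The data points are invented.
    months = [1, 2, 3, 4, 5, 6]
    sales = [10.0, 12.1, 13.9, 16.2, 18.1, 19.8]

    n = len(months)
    mean_x = sum(months) / n
    mean_y = sum(sales) / n

    # Ordinary least squares: slope = cov(x, y) / var(x)
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(months, sales))
        / sum((x - mean_x) ** 2 for x in months)
    )
    intercept = mean_y - slope * mean_x

    # The surfaced trend makes the unseen month 7 predictable.
    print(f"trend: y = {slope:.2f}x + {intercept:.2f}")
    print(f"predicted month-7 sales: {slope * 7 + intercept:.1f}")

Nothing more than high-school statistics, but it is exactly the move from the
unpredictable to the predictable described above.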
When examining the human disciplines, a natural separation exists between the
hard and soft sciences: on one hand, Neurology, Biophysics, and Linguistics
study how the brain (the human “hardware”) processes language; on the other,
Communications, Sociology, Psychology, and Anthropology study how humans use
language within a social context to convey knowledge.
The Language Instinct
It is in the heart of man to know and be known by others, by at least one
other person at a minimum. This pursuit of community is at the heart of
Pascal’s dilemma. (Go look it up.) Humans have the need to communicate, to
share their innermost thoughts through words, signs, and signals. This
instinct, sometimes referred to as the “Language Instinct,” implies that communicating is not an
invention like writing or tools, but rather an innate capacity for sharing
thoughts and experiences. Many see the brain as a “neural network” in which
language resides in dedicated regions, notably Broca’s and Wernicke’s areas,
supported by other functions such as the motor skills necessary to move the
mouth and tongue, reasoning skills to process high-order concepts, and so
forth. Hence the desire to simulate language in computers with circuitry,
neural networks, and programming.
The structure, or grammar, of a language, however, is quite different and
reflects the culture in which it evolved. For example, Germanic grammar is
quite different in nature from that of Swahili. The ordering of words into
sentences, formal rules for writing, and the like can be grouped into language
families and can be taught and codified, whereas the urge to speak and
communicate is a natural part of a baby’s reasoning and development. Some
linguists posit a “universal grammar” as part of this innate ability, a debate
we will not digress into. However, suffice it to say that there is no need to
have a universal grammar to understand the difference between language as an
ability of humans and grammar as a structural arrangement of words.
Programmers fight over languages all the time, some preferring Java to C++ or Python to Perl. This is a debate over grammar first and nothing more. Operating systems also have grammars, as witnessed by the wars between Linux aficionados, Apple fanboys, and Windows diehards. These communities of practice have agreed to use a particular way of speaking to the machine in order to give the hardware instructions.

Those instructions have to be translated into a language the machine can understand. This is the job of the compiler, which takes human-readable code, such as Basic or Java, and turns it into machine code (directly, or by way of an intermediate bytecode). The machine code is then interpreted by the CPU as the 1s and 0s that the circuitry can use.
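To see that translation step concretely, here is a small sketch using Python’s
standard-library dis module, which exposes the intermediate bytecode the
interpreter produces; Python is used purely as a convenient illustration, and
the same idea applies to any compiled language.

    import dis

    def add(a, b):
        # One line of human-readable "grammar".
        return a + b

    # Disassemble the function: each output line is one bytecode
    # instruction (e.g., LOAD_FAST), the machine-facing translation
    # of the source code above.
    dis.dis(add)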
Of course, it’s far more complicated than that. If you want to talk to the
graphics card and tell it to render a particular picture on the screen, you
use different commands (verbs) than if you are asking the math coprocessor to
calculate a function. You can talk to the operating
system and tell it to open a port, you can use an API to send commands and data
to other systems and programs through that port, and so forth. The
possibilities are as creative as any human communications. You just have to
know how to talk to the iron.
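As a minimal sketch of that last point, here is how asking the operating
system for a port and sending data through it looks in Python; the host and
port are hypothetical stand-ins for any listening service.

    import socket

    # Ask the operating system to open a TCP connection to another program.
    # "localhost" and port 8080 are hypothetical; any listening service would do.
    with socket.create_connection(("localhost", 8080), timeout=5) as conn:
        conn.sendall(b"HELLO\n")               # send a command through the port
        reply = conn.recv(1024)                # read the other program's answer
        print(reply.decode(errors="replace"))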
Based on the ability to talk to machines via code, and knowing how to parse
human speech with NLP, the goal of AI is to create agents that take care of
everyday tasks and operate independently in the background with little to no
human intervention: scheduling appointments, answering simple questions,
alerting a robot to refill the supply line, and so forth. These may sound like
the stuff of science fiction, but each example has already been realized in the
current business climate by Amazon, Google, Tesla, and others.
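As a toy illustration of the “answering simple questions” case, the
parse-then-act loop of such an agent can be sketched as a rule-based matcher;
this is an invented example, not any vendor’s actual implementation, and real
agents replace the pattern table with trained NLP models.

    import re

    # A toy intent table mapping utterance patterns to canned actions.
    INTENTS = [
        (re.compile(r"\bschedule\b.*\bmeeting\b", re.I), "Booking a meeting slot."),
        (re.compile(r"\bweather\b", re.I), "Fetching today's forecast."),
        (re.compile(r"\brefill\b.*\bsupply\b", re.I), "Alerting the floor robot."),
    ]

    def respond(utterance: str) -> str:
        """Match the user's words against known intents and pick an action."""
        for pattern, action in INTENTS:
            if pattern.search(utterance):
                return action
        return "Sorry, I don't understand that yet."

    print(respond("Please schedule a meeting with Sam"))   # Booking a meeting slot.
    print(respond("Refill the supply line on station 3"))  # Alerting the floor robot.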
S Bolding—Copyright © 2020 · Boldingbroke.com