Cognitive AI: The Human DNA of Machines
by Kishore Jethanandani
Cognitive artificial intelligence (AI) is a step change in machine intelligence, enriched by data from image, speech, video, and audio recognition in consumer and enterprise network applications.
As a result, service providers will be saddled with exponentially higher data volumes spread over many more edge nodes on distributed networks, leaving them more susceptible than ever to wild traffic spikes.
Microsoft's consumer application for the blind, which helps them move about independently, epitomizes the spectrum of cognitive AI capabilities. Blind users perceive objects through video recognition and receive environmental data from sensors to navigate freely, while cloud- and network-hosted machine intelligence processes all of that data in the background.
Enterprise applications, boosted by APIs such as those behind Amazon's Alexa, have focused on customer service and on accelerating business processes. Boxover, for example, integrates CRM databases with speech recognition so that airlines can notify customers about missing bags via chatbots. Information flows seamlessly across distributed networks from operations to customer data and on to a chatbot on a passenger's smartphone.
The euphoria over chatbots in 2016 has waned as consumers grew discouraged by the wrinkles in their design. Investments in natural language processing and other types of cognitive AI, however, continue to grow unabated. An MIT Technology Review survey found that companies currently investing in machine intelligence are focused on natural language processing (45%) and on text classification and image recognition (47%), among other areas. For this year, text classification (55%) and natural language processing (52%) are among the top priorities for machine intelligence planners.
Cognitive AI applications that process multiple streams of data are best run in real time on clouds and telecom networks. Lightbend offers a platform designed for cognitive AI-enabled enterprise applications, based on its Reactive Fast Data platform (built on top of Scala and Java), to address the needs of elasticity, scalability, and failure management.
“A new approach is required for developers leveraging image, voice and video data across many pattern recognition and machine learning use cases,” said Markus Eisele, developer advocate at Lightbend. “Many of these use cases require response times in the hundreds-of-milliseconds timeframe. This is pushing the need for languages and frameworks that emphasize message-driven systems that can handle asynchronous data in a highly concurrent, cloud-based way — often in real-time.”
Microservices are key to keeping pace with the multitude of variations in data flows.
“Cognitive applications are often a set of data sources and sinks, in which iterative pipelines of data are being created,” Eisele said. “They call for low latency, complex event processing, and create many challenges for how state and failure are handled. Microservices allow composability and isolation that enables cognitive systems to perform under load, and gives maximum flexibility for iterative development.”
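Eisele's description of sources, sinks, and iterative pipelines can be sketched in miniature. The following is an illustrative Python asyncio pipeline, not Lightbend's (Scala/Java-based) implementation; bounded queues connect a source stage to a processing stage and a sink, giving backpressure and stage isolation. All stage names and the placeholder score are assumptions for illustration.

```python
import asyncio

async def source(queue: asyncio.Queue) -> None:
    # Emit a stream of events (e.g., recognition results) into the pipeline.
    for frame_id in range(5):
        await queue.put({"frame": frame_id, "label": "object"})
    await queue.put(None)  # sentinel: end of stream

async def classify(inbox: asyncio.Queue, outbox: asyncio.Queue) -> None:
    # Intermediate stage: enrich each event; failures stay isolated here.
    while (event := await inbox.get()) is not None:
        event["confidence"] = 0.9  # placeholder for a real model score
        await outbox.put(event)
    await outbox.put(None)

async def sink(queue: asyncio.Queue, results: list) -> None:
    # Terminal stage: collect (or forward) processed events.
    while (event := await queue.get()) is not None:
        results.append(event)

async def pipeline() -> list:
    # Bounded queues provide backpressure between stages.
    q1, q2, results = asyncio.Queue(maxsize=2), asyncio.Queue(maxsize=2), []
    await asyncio.gather(source(q1), classify(q1, q2), sink(q2, results))
    return results

if __name__ == "__main__":
    print(asyncio.run(pipeline()))
```

Because each stage only touches its queues, a failing stage can be restarted or replaced without disturbing the others, which is the isolation and composability Eisele points to.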
Cognitive AI's ability to process a broad spectrum of data and content enables it to tackle daunting challenges such as zero-day cybersecurity threats, notorious for their sabotage of the Iranian nuclear program, which elude search engines by operating in the shadowy world of the dark net. They are spotted by microscopic classification of data and content gathered by scouring the Internet, with machine learning algorithms able to parse any kind of file and ferret out those related to cyber threats and malware.
SparkCognition partnered with Google (Nasdaq: GOOG) to leverage TensorFlow, an interface for executing machine learning algorithms (including those capable of pattern recognition in cognitive data), to identify threats lurking in millions of mobile and IoT devices.
“Signature-based security software is updated periodically and falls short for protection against zero-day or polymorphic malware,” said Joe Des Rosier, senior account executive at SparkCognition. “Our algorithm dissects the DNA of files suspected to be malicious and is deployed as a microservice to run on the endpoint [such as a mobile device] or in the cloud. Unsupervised, self-learning aspects of our algorithm help it keep pace with the changing landscape of cybersecurity.”
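SparkCognition's algorithm is proprietary, but the idea of dissecting a file's "DNA" rather than matching signatures can be illustrated with a toy sketch: derive a byte-level fingerprint (a histogram plus entropy, since packed or encrypted malware payloads tend toward high entropy) and compare it against centroids of known-benign and known-malicious files. All function names, features, and the decision rule here are illustrative assumptions, not the vendor's method.

```python
import math
from collections import Counter

def byte_histogram(data: bytes) -> list:
    # Normalized 256-bin byte frequency: a crude "DNA" fingerprint of a file.
    counts = Counter(data)
    total = len(data) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def entropy(data: bytes) -> float:
    # Packed or encrypted payloads (common in malware) approach 8 bits/byte.
    return -sum(p * math.log2(p) for p in byte_histogram(data) if p > 0)

def cosine(a, b) -> float:
    # Cosine similarity between two fingerprints.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def classify(sample: bytes, benign_centroid, malicious_centroid) -> str:
    # Nearest-centroid decision over the byte-histogram feature space.
    hist = byte_histogram(sample)
    return ("malicious"
            if cosine(hist, malicious_centroid) > cosine(hist, benign_centroid)
            else "benign")
```

A real system would learn far richer features than a byte histogram, but the shape of the pipeline, fingerprint extraction followed by similarity scoring, is the same.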
Manual inspection of equipment and facilities spread across geographies and neighborhoods is not fast enough for enterprises to make replacements in short order and avoid downtime. One SparkCognition customer had 20,000 vending machines sending unspecified alerts with no way of separating out the false positives.
“Cognitive AI helps to characterize failures with visuals for parts and natural language to parse manuals,” said Tina Thibodeau, vice president of strategic alliances and channels at SparkCognition. “We use historical and real-time data to pinpoint the causes of expected failures with a long enough lead time for the customer to be able to act. Our service provider partner provides the connectivity, the data layer, and the business logic.”
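The vending-machine scenario, separating false positives from real alerts by combining historical and real-time data, can be approximated with a simple rolling-statistics baseline. This is an illustrative sketch, not SparkCognition's method: a reading is flagged only when it deviates sharply from its own recent history, which suppresses alerts from machines behaving normally.

```python
import statistics

def flag_anomalies(readings, window=10, threshold=3.0):
    # Flag readings that deviate sharply from the recent rolling baseline.
    # Readings consistent with their own history score near zero and are
    # suppressed, filtering out false-positive alerts.
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history) or 1e-9  # guard a flat baseline
        z = (readings[i] - mean) / stdev
        if abs(z) > threshold:
            alerts.append((i, readings[i]))
    return alerts
```

A production system would model each machine's seasonality and failure modes, but even this sketch shows how historical context turns a flood of raw alerts into a short, actionable list with lead time to dispatch a technician.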
Future of cognitive AI
Robust cognitive AI systems are works in progress, their information architectures honed by learning from the false starts of early applications.
“The value of any AI technology, not only cognitive, hinges on its knowledge base, content base, and the database curated to retrieve valuable information,” said Earley Information Science CEO Seth Earley. “A bot or a digital assistant is a retrieval engine, a channel, which needs an information architecture, knowledge engineering, and data integration before an AI engine can be trained to find the relevant information.”
Cognitive AI poses some unique challenges, according to Earley.
“Knowledge engineering for natural language considers its interactive nature,” he said. “When somebody has a query, the bots respond, and human agents affirm or make corrections to the classification of the intent. The learning algorithms evolve as humans edit the responses in an increasing number of conversations. In speech recognition, the process starts with capturing the language of a message. Its interpretation, or the intent, follows and requires human effort.”
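The human-in-the-loop cycle Earley describes can be sketched as a toy bag-of-words intent model: each time an agent affirms or corrects a label, the model absorbs the example and future predictions shift accordingly. The class name, intent labels, and scoring rule are hypothetical illustrations, not any vendor's implementation.

```python
from collections import Counter, defaultdict

class IntentClassifier:
    """Toy bag-of-words intent model that learns from human corrections."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # intent -> word frequencies

    def learn(self, utterance: str, intent: str) -> None:
        # A human agent affirms or corrects the intent label; the model
        # absorbs the labeled example, mirroring the edit loop Earley describes.
        self.word_counts[intent].update(utterance.lower().split())

    def predict(self, utterance: str) -> str:
        # Score each known intent by the relative frequency of the
        # utterance's words in that intent's history.
        words = utterance.lower().split()
        def score(intent):
            counts = self.word_counts[intent]
            total = sum(counts.values()) or 1
            return sum(counts[w] / total for w in words)
        return max(self.word_counts, key=score, default="unknown")
```

For example, after an agent corrects a few mislabeled "lost luggage" conversations, new utterances mentioning bags route to that intent without further intervention.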
Earley said that deciphering accents is a part of speech recognition that improves as learning algorithms read the nuances in any expression. For video recognition, vectors and vector spaces, with clusters of the characteristics of objects, are used, with people helping to compare and identify the objects, he said.
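The vector-space matching Earley mentions can be illustrated with a nearest-centroid lookup: an object's feature vector is assigned to the closest cluster of object characteristics, and a human reviewer then confirms or corrects the match. The vectors and labels below are made up for illustration.

```python
import math

def nearest_cluster(vector, centroids):
    # Assign an object's feature vector to the closest cluster centroid;
    # in Earley's workflow, a person then affirms or corrects this match,
    # and the correction feeds back into the clusters.
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: distance(vector, centroids[label]))
```

Real video-recognition embeddings have hundreds or thousands of dimensions, but the comparison step, a distance computation in a shared vector space, works the same way.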
The virtuous circle of adoption and improvement
Previously published by Light Reading’s Telco Transformation