Intelligent street lights illuminate new applications

Networked city lights with cameras and sensors gather data that can be used for traffic management

by Kishore Jethanandani

Smart cities have evolved beyond pilot projects testing digital services to the delivery of networked digital services. Their new pivot is multi-use platforms that integrate several data streams, which are then leveraged to improve the city’s infrastructure.

Street lights, for example, present an opportunity to create data networks by using each pole as a node that gathers data from a cluster of local devices and feeds it to several applications and platforms for generating services.

San Diego

San Diego reflects an emerging trend of using street lights in a pervasive computing and networking system.

“The replacement of aging street lights with LED lights not only created an opportunity to significantly lower energy costs but also to gather data by making them aware of their surroundings with audio, video and environmental sensors,” David Graham, deputy COO of San Diego, said in an interview with Telco Transformation.

Smart cities are also supplementing government grants with surpluses from energy savings to be able to fund larger projects.

“Private sector companies, with legal protection from energy savings performance contracts, are willing to make the initial investments in street lighting because cities agree to share the huge savings realized from the lower energy consumption by LED lights,” Ryan Citron, research analyst at Navigant Research, told Telco Transformation.

Data gathered from HD cameras installed on street lights also has transportation applications that help to optimize the timing of traffic signals at intersections to minimize congestion.

“Currently, we have adaptive signaling for traffic flow management at 30 intersections,” said Graham. “The data from the sensors on stop lights is analyzed to decide the intervals at which stop lights change. With the use of AI, we can make the street lights more adaptive not only by events but also the length of queues, holidays and many other variables. We have been able to reduce queuing by 40% in one of the corridors.”
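
The adaptive-signaling idea in Graham's description can be pictured with a small sketch: green time at an intersection is allocated in proportion to detected queue lengths, within operational bounds. The function, bounds and numbers below are hypothetical illustrations, not San Diego's production logic.

```python
# Illustrative sketch: allocate green time at an intersection in proportion
# to detected queue lengths, within fixed minimum/maximum bounds.
# Hypothetical names and numbers; not San Diego's actual algorithm.

def allocate_green_times(queue_lengths, cycle_seconds=120,
                         min_green=10, max_green=60):
    """Split a signal cycle across approaches based on relative queue lengths."""
    total = sum(queue_lengths) or 1  # avoid division by zero on empty roads
    raw = [cycle_seconds * q / total for q in queue_lengths]
    # Clamp each phase to operational bounds.
    return [max(min_green, min(max_green, g)) for g in raw]

# Example: queues of 40, 10, 25 and 5 vehicles on four approaches.
print(allocate_green_times([40, 10, 25, 5]))
```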

Network choices
The growing scope of smart cities applications has created many new possibilities in improving a city’s infrastructure, but it has also created a dilemma for city network administrators. While their ideal choice for a network is fiber optics, this option can be cost-prohibitive for the current bandwidth needs of cities. Other popular low bandwidth and cheaper networks, like SigFox, are useful for microdata but could impede the future growth of higher bandwidth smart city applications.

Furthermore, multiple applications, consuming varying volumes of data, are built on top of a common platform. The data is not only for vehicle traffic management but also smart lighting to save energy, event and emergency management, smart parking, air quality monitoring or uses as varied as easing eye strain by changing the color of LED lights, crime prevention, surveillance, predictive failure notification, etc. Flexible networking is needed to route traffic cost efficiently, and meet service quality standards for a broad variety of applications.

Some solution providers improvise with help from analytics and make do with the least possible bandwidth in the short term.

“Analytics embedded in the cameras on street lights transmit only the results of the query requested for traffic management such as counts of traffic in a specific lane. Since the traffic flow in a region is interrelated, we can use the traffic data from the queries and pre-determined correlations between them to estimate the expected impact on traffic at proximate intersections,” Sohrab Modi, CTO and SVP of Engineering at Echelon, told Telco Transformation. Accurate estimates are achieved only after training the algorithms on a great deal of data.
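
A rough sketch of the approach Modi describes might look like the following: cameras report only lane counts, and pre-determined correlation weights, learned offline from historical data, estimate the expected load at a neighboring intersection. The intersection names and weights are invented for illustration.

```python
# Hypothetical sketch of the idea described above: cameras report only lane
# counts, and pre-determined correlation weights estimate the expected load
# at a downstream intersection. Weights here are made up for illustration.

observed_counts = {"intersection_A": 120, "intersection_B": 85}

# Correlation of each observed intersection with a downstream one,
# assumed to be learned offline from historical traffic data.
correlation_weights = {
    "intersection_C": {"intersection_A": 0.6, "intersection_B": 0.3},
}

def estimate_downstream(target, counts, weights):
    """Weighted sum of upstream counts as a proxy for downstream load."""
    return sum(counts[src] * w for src, w in weights[target].items())

print(estimate_downstream("intersection_C", observed_counts, correlation_weights))
```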

Flexible networks
A study conducted by Navigant Research “analyzed a dozen connectivity technologies and their suitability as a smart street lighting/city platform” and identified medium-band networking solutions as the best option for balancing cost and support for the most “high-value smart city applications.”

Power line communication (PLC), a medium-band networking technology, has been widely used in European countries because it provides network connections over the power lines already connected to street lights and saves on upfront capital costs. In combination with RF mesh, a peer-to-peer wireless network, it maneuvers around obstacles such as tall buildings. Being hard-wired, PLC is less flexible, but that also makes it more secure.

Narrowband options like LPWAN are very inexpensive and support long battery life, but by themselves they cannot serve the needs of several applications. Carriers are launching NB-IoT and LTE Cat-M1, which provide the security of licensed spectrum, while the other narrowband networks use free unlicensed spectrum. Broadband connections like 3G and 4G are ubiquitous and can serve the bandwidth needs of multiple applications. WiFi is a cheaper broadband option because it does not use licensed spectrum, and it can aggregate traffic from several devices.

Smart cities can prepare themselves for their future needs by subsuming these networks into an overarching software-driven network with centralized controls. The intelligence of centralized controls will help to route traffic to any of these networks depending on the needs of individual applications and their users.
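
As a loose illustration of such policy-driven routing, the sketch below picks the cheapest network that satisfies an application's bandwidth and latency needs; the network figures and the selection rule are assumptions for the example, not a vendor's implementation.

```python
# Minimal sketch of centralized, policy-driven network selection: given an
# application's bandwidth and latency needs, pick the cheapest available
# network that satisfies them. All figures are illustrative only.

NETWORKS = [
    {"name": "LPWAN",  "bandwidth_kbps": 10,    "latency_ms": 2000, "cost": 1},
    {"name": "PLC",    "bandwidth_kbps": 500,   "latency_ms": 300,  "cost": 2},
    {"name": "WiFi",   "bandwidth_kbps": 50000, "latency_ms": 30,   "cost": 3},
    {"name": "4G LTE", "bandwidth_kbps": 20000, "latency_ms": 50,   "cost": 5},
]

def select_network(app_bandwidth_kbps, app_max_latency_ms):
    """Return the cheapest network meeting the application's requirements."""
    candidates = [n for n in NETWORKS
                  if n["bandwidth_kbps"] >= app_bandwidth_kbps
                  and n["latency_ms"] <= app_max_latency_ms]
    return min(candidates, key=lambda n: n["cost"]) if candidates else None

print(select_network(5000, 100))   # e.g., an HD camera feed -> WiFi
print(select_network(1, 5000))     # e.g., a parking sensor -> LPWAN
```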

Smart cities have learned to build the foundations of intelligent services that can serve a variety of needs valuable enough for consumers to be willing to pay for them. As more services are offered on the same platform, the incremental costs of each of them decline. Software networks will keep the costs of network expansion low by making the most of the capabilities of networks already in place, and city administrators can future-proof their networks and focus on creating the environment for more innovative services.

Global multicloud webscale networks nip spikes in traffic

Global networks capitalize on heterogeneous network resources to keep applications performing

by Kishore Jethanandani

Heterogeneous applications and multiple clouds are characteristic of global webscale networks. Traffic flows in such interdependent networks snowball unexpectedly; spikes in application use are endemic and degrade performance. At its worst, the failure of an application has a domino effect on the network and a catastrophic collapse ensues.

Optimization of web-scale networks irons out their many wrinkles, automates operations, and speeds up responses with predictive algorithms to preempt network outages by deploying resources fast enough to keep pace with anticipated traffic.

Emergence of web-scale networks 

The Twitter Inc. engineering team revealed the details of its redesign for web-scale operations, which began after the 2010 World Cup, when spikes in traffic repeatedly disabled its network for short periods. By August 2013, Twitter’s infrastructure was robust enough that nothing untoward happened when traffic surged to 20 times the normal rate during the Castle in the Sky broadcast in Japan.

Twitter shifted to a virtualized and microservice-based architecture to gain flexibility. Tweets were assigned an identity so they could be stored on any storage device in a distributed network. Further improvements were made after 2014 to provide route options, distribute resources to the edge and to enable granular optimization with policy management. Similar approaches have been adopted by companies such as Google (Nasdaq: GOOG), Microsoft Corp. (Nasdaq: MSFT) and Facebook.

Investments in bandwidth alone are not enough to cope with traffic flows as they are increasing exponentially due to the growth of the Internet of Things, speech, image, and video data. Web-scale networks streamline processes to avoid local choke points and to increase the overall availability of the network with optimization.

Software-driven performance improvement 

Virtualization and microservices — along with managed services platforms — play a critical role in optimizing the network. Microservices are tools to wring out the inefficiencies by aligning processes with data flows to reduce latencies and increase availability in web-scale networks.

“Microservices are focused on building small services that provide a single piece of functionality,” said Eric Peffer, cloud consulting practice lead, World Wide Technology. “You string several of these microservices together for more advanced functionality. Platforms such as Kubernetes, Pivotal Cloud Foundry, Docker Swarm, Service Fabric and AWS Elastic Beanstalk provide the management and tooling to control the elaborate coordination of the strings of microservices. The data flows are speeded up by abstracting functionality for a series of processes that are aligned to data flows from their source to the destination.”
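
A toy sketch of that chaining idea follows: each stage performs one small piece of work and hands its result to the next. In a real deployment each stage would be a separate container behind an HTTP or message-queue interface, scheduled by a platform such as Kubernetes; the stage names and data here are made up.

```python
# Toy sketch of "stringing microservices together": each stage does one small
# piece of work and passes its result along. Stage names are hypothetical.

def ingest(event):            # stage 1: accept a raw event
    return {"raw": event}

def enrich(payload):          # stage 2: add context
    payload["source"] = "street-camera-42"   # hypothetical identifier
    return payload

def classify(payload):        # stage 3: produce the final result
    payload["vehicle_count"] = len(payload["raw"].split(","))
    return payload

PIPELINE = [ingest, enrich, classify]

def run_pipeline(event):
    result = event
    for stage in PIPELINE:
        result = stage(result)
    return result

print(run_pipeline("car,car,truck"))
```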

Software-defined networks also have flexibility in choosing the means to move traffic so that local choke points do not necessarily slow down movement. The operations needed to make or change choices can be executed automatically at the application level.

“There are services available for moving data from one application to another, such as caching, data grid services and message queuing, allowing you to adapt to changes and maintain a consistent flow of data,” Peffer said.

Intelligent network operations 

Web-scale networks interconnect multiple clouds as global enterprises extend the reach of their applications to branches and partners around the world without sacrificing performance. Enterprises want to make their applications and microservices portable across several clouds and interweave them on demand. The share of enterprises with a hybrid cloud strategy rose to 58% in 2017, up from 55% in 2016.

VMware Inc. (NYSE: VMW) has built a native cloud on top of Amazon Web Services Inc. for geographical reach, availability, and flexibility. VMware’s multi-cloud management stack, along with its Cloud Foundation and NSX platforms, enables portability across clouds. The bedrock of the management platform is a policy management tool for micro-segmentation of the cloud.

Another survey found that the two most important motivations for a multi-cloud strategy were more efficient workloads (73%) and more agility (69%). Currently, one-third of enterprises want to support multiple clouds for synchronizing applications across them or for workload and data migration. Looking ahead, 42% want most of their resources to be used for management and orchestration across multiple clouds. The intelligence and the management tools are advancing to cope with the increased complexities.

The policy management software plays a supplementary role to the traditional OSS/BSS systems.

“The policies define the parameters for security, configuration, the footprint of the application, edge or core, mini-datacenter, traditional data centers, resource use, networking performance metrics and more,” said VMware’s Gabriele Di Piazza, vice president of products and solutions at Telco NFV Group.

“OSS/BSS systems have been undergoing a significant transformation with IP-based services, which also involved data collection of application and network performance to calculate KPIs,” Di Piazza said. “Machine intelligence does dynamic analysis of data to understand the key determinants of performance to predict network behavior and performance. This is needed to reduce the mean time to repair, take proactive action to prevent failures, or to scale capacity before it falls short. Our acquisitions of Wavefront and Arcane are our investments in real-time gathering of data and predictive algorithms.”

Performance needs to be fortified at every level to maintain consistency across global networks. Web-scale networks such as Google and AWS have tools to auto-scale in response to surges in traffic; they can spin up new instances when traffic surges.

“Performance behavior in verticals like e-commerce have their unique characteristics which we identify with our telemetry data,” said Anand Hariharan, vice president of products, Webscale Networks Inc. “The traffic can surge by as much as a hundred times following events like celebrities posting a picture on Instagram with their products. We have written an algorithm to forecast traffic surges specifically for e-commerce to deploy more instances to keep pace with demand growth.”
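
A minimal sketch of that kind of forecast-driven scaling is shown below; the averaging rule, surge factor and per-instance capacity are invented for illustration and are not Webscale Networks' algorithm.

```python
# Illustrative sketch: forecast the next interval's request rate from recent
# samples plus a surge factor, then decide how many application instances to
# run. Numbers and rules are assumptions for the example.

def forecast_next_rate(recent_rates, surge_factor=1.5):
    """Simple forecast: recent average scaled by a safety margin."""
    return (sum(recent_rates) / len(recent_rates)) * surge_factor

def instances_needed(forecast_rps, capacity_per_instance=500, minimum=2):
    """Round up to enough instances for the forecast load."""
    needed = -(-int(forecast_rps) // capacity_per_instance)  # ceiling division
    return max(minimum, needed)

recent = [1200, 1500, 2600, 4800]   # requests per second, trending up
forecast = forecast_next_rate(recent)
print(forecast, instances_needed(forecast))
```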

Web-scale networks are buffeted by a myriad of factors at the local and global level, in addition to the demands of new applications and sources of data. They have more choices of routes and network configurations, along with design and automation tools to adapt in the moment. Machine learning will continue to evolve as more data is gathered and innovative ways are found to direct and optimize traffic.

A version of this article was published by Light Reading’s Telco Transformation



Future cloud-native networks could speed up app development

Applications development with heterogeneous resources on networks speeds up with DevOps and containers

By Kishore Jethanandani

Future networks are going cloud-native with a wide range of ramifications for the speed of applications development. Developers will be freed up to create new solutions unencumbered by hardware constraints or even the choice of platforms.

Software-defined open service provider networks are following in the footsteps of datacenters and storage devices — they are becoming pools of network resources that can be used interchangeably by multiple clients in several regions.

In characteristic cloud-like manner, they will serve a variable flow of services, in both volume and type, delivered on demand rather than through fixed on-premises IT deployments. In this scenario, service flows are best able to move with demand currents via containers that can be added or subtracted as needed.

The heterogeneity of resources, operating systems, equipment vendors and services on telecom service provider networks is expanding as the epicenter of service delivery sprawls towards the edge to support the Internet of Things, big data analytics, mobile and wearable devices, and autonomous cars, now and in the future. The demand for services waxes and wanes at these edge points, synchronously or asynchronously. Thus, service providers need the flexibility and elasticity of containers to scale up and out, serving a diversity of needs with resources that are not encumbered by platforms, protocols or hardware.

The development of applications with containers seamlessly dovetails into operations and deployments enabled by a growing range of scheduling, management, and orchestration platforms.

Containers are far more portable than virtual machines (VMs) because they abstract away not only the hardware but also the operating system. Stateless containers go a step further than stateful ones and decompose the configuration and operating data of containers. That state data is stored in a database and invoked when services are generated.
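
A minimal sketch of the stateless pattern: the container image holds only code, while configuration and operating data live in an external store and are fetched at startup, so any replica can serve any request. The environment variable, key names and stand-in store below are hypothetical.

```python
# Sketch of the stateless-container pattern: code in the image, state in an
# external store. The variable and key names here are hypothetical.

import json
import os

def load_service_state(store):
    """Pull configuration and operating data from an external key-value store."""
    key = os.environ.get("SERVICE_NAME", "vnf-firewall")  # injected at deploy time
    return json.loads(store.get(key, "{}"))

# A stand-in for a real database or key-value store such as Redis or etcd.
fake_store = {"vnf-firewall": json.dumps({"rules": ["allow 443", "deny *"]})}

state = load_service_state(fake_store)
print(state["rules"])
```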

Service generation with containers in the telecom world makes greater demands than in the enterprise. For one, service providers are bound by stringent quality-of-service contracts. Second, telecom companies live in a more distributed and heterogeneous environment with considerable legacy resources.

Containers, workloads, and operations 
DevOps is a sequence of business processes starting with application development by a team of developers, followed by the developers testing for bugs. Applications then go through staging, a process of testing for the desired operating performance, before ending with production. Operations has historically been a valley of death for developers, where many applications floundered because they could not work in the production environment. Containers seek to smooth the transition from development to production with continuous delivery methods, using tools from Jenkins, Chef, Ansible, Docker Inc. and Mesosphere along with a variety of plug-ins.

Container images play a role in enabling a distributed team of developers to write code and use it in any environment. They automate the tedium of manually ensuring that the code works in any IT operating environment, along with its dependencies, such as linked databases and firewalls, and the attendant configuration, as it passes from one group of developers who could be, for example, using Mac to another using Windows.

Containers on telecom networks 
Deployment of code into the production environment of a telecom service provider is an exercise in scaling while also ensuring security and quality of service. It includes the processes of clustering containers and joining them with resources and functions on the network at the desired nodes to generate a service.

New-age tools like Mesos achieve scale by abstracting all network resources and functions so that they can be invoked by a single operating system hosted in a datacenter. Verizon is one carrier that is using Mesos for its hyperscale operations. Verizon Labs’ Damascene Joachimpillai, director of technology, explained the rationale for containers and for management and orchestration platforms such as Mesos, as opposed to virtual machines.

“Most applications — IoT or otherwise — have multiple cooperating and coordinating tasks. Each of these tasks has specific resource requirements,” Joachimpillai said. “One can bundle them into a single monolithic application and provide management using a virtual machine, or deploy them independently. When one deploys these tasks as microservices, one needs a scalable resource scheduler… If they were run on bare metal, then redundancy and resiliency of the application must be considered — and one needs to provide an application management entity that monitors the health. Most of these needs and constraints are removed when using containers and an application orchestration system like Mesos.”

The production environment of a network does not use only containers, nor will it necessarily do so in the future, so means must be found to interlink containers with options such as virtual machines and physical resources, regardless of the IT environment.

“When you get into a production environment where you have workloads on physical or virtual assets or on the cloud, it is a whole new world… Instead of using multiple platforms for a diversity of workloads, we have a single platform for all of them,” Hussein Khazaal, head of marketing at Nuage Networks, said.

In the labyrinth of a network of this nature, with the sprawl growing as containers multiply, security threats lurk and customer experience can suffer as the likelihood of failures grows.

“We automate monitoring and responses to security events or failures through our Virtualized Security Services (VSS) features and integrations with security threat analytics solutions from our partners,” Hussein added. “VSAP [Virtualized Services Assurance Platform] can correlate failures in the underlay with impacted workloads in the overlay, so that operators can quickly identify faults and make corrections with minimal effort.”

The emerging software-driven network gains agility and flexibility by threading together several swarms of containers, virtualized and physical networks, abstracted resources and functions that are held together by data and intelligence for visibility, automated responses, and monitoring tools for failure prevention, optimization and quality assurance. Containers help by bundling together interrelated components of a larger application and making them reusable for ever-changing needs.

A version of this article was previously published by Light Reading’s Telco Transformation

Zeroing in on Stuxnet-like cyber adversaries

by Kishore Jethanandani

Cyber defense is on high alert against assaults by unknown and elusive threats akin to Stuxnet, which hit Iranian nuclear facilities. Firewalls — designed for known, signature-based malware — are routinely breached.

Zero-day exploits

Alternative approaches for protecting networks against elusive zero-day cyber attacks, such as AI-enabled services and applications, do exist, but adversaries have found ways to subvert them. Preventive methodologies, which eliminate vulnerabilities at the time of software development, require a management transformation before they can be implemented.

SDN controllers are the big brothers of networks: they receive data from sensors on unusual activities in every corner of virtualized networks. When unusual activity is detected, SDN controllers prompt actuators to take action against threats.
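
Conceptually, the loop looks something like the sketch below: sensor reports arrive at the controller, which matches them against simple rules and instructs an actuator, for example to rate-limit or quarantine a virtual switch port. The thresholds and actions are illustrative assumptions, not any product's API.

```python
# Conceptual sketch of the sensor -> SDN controller -> actuator loop.
# Thresholds, port names and actions are invented for illustration.

def controller_decision(report, baseline_pps=10_000):
    """Return an action for an actuator if a report looks anomalous."""
    if report["packets_per_second"] > 10 * baseline_pps:
        return {"action": "rate_limit", "target": report["port"]}
    if report.get("unknown_destination"):
        return {"action": "quarantine", "target": report["port"]}
    return {"action": "none", "target": report["port"]}

print(controller_decision({"port": "vswitch-3/7", "packets_per_second": 250_000}))
```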

Finding zero-day threats, however, is a formidable challenge. Virtualized networks generate a torrent of software components with unpatched bugs — unknown vulnerabilities that hackers can exploit unnoticed. IoT networks and connected devices are adding another wave of software to the mix. According to a recent survey by cybersecurity firm Herjavec Group, 70% of security operation centers see unknown, hidden, and emerging threats as their most challenging problem, and the capability they would most like to acquire is threat detection (62%).

Zero-day attacks pinpoint specific bugs while leaving only faint footprints. When detected, their polymorphic, chameleon-like characteristics let them morph into new, unknown versions. Network perimeters, as a result, are chronically porous.

Unsurprisingly, zero-day vulnerabilities, usually discovered accidentally during routine maintenance, peaked at 4,958 in 2014 and declined to 3,986 in 2016, according to a Symantec report. Product development processes, which incorporate security at the outset, are believed to be responsible for the fall.

Law enforcement was initially able to foil zero-day attacks by listening to conversations among cybercriminals over the darknet. Hackers have since closed this source of information.

“Cybercriminals construct their private networks to prevent law enforcement from listening to their conversations,” said Mike Spanbauer, vice president of research strategy, NSS Labs Inc. A research study by NSS Labs on breach detection systems found that five of the seven tested missed malware that evades firewalls, or advanced malware like zero-day threats, and their average effectiveness was 93%. The shortfall of 7% leaves the entire network at risk.

Living off the land

The story is no different when cybercriminals are inside a virtualized network. They can blend into the network by acquiring credentials from the network’s administration, which is called “living off the land” in the cybersecurity world. Service providers are prone to decrypting data — as illustrated by a recent FTC case — when they move it across transport layers, providing an opportunity for intruders to sniff out credentials. Intruders then use remote control protocols — meant for legitimate purposes such as load balancing — to maliciously control multiple VNFs.

Opportunities for deception abound in virtualized networks. For example, by masquerading as trusted authorities — such as those responsible for quality of service — attackers gain access to the confidential information of unsuspecting victims across the network. Cybercriminals can spin up virtual machines, recreated from their snapshots or images, and inject the stolen identities of trusted authorities to ward off any suspicion of malicious activity.

Hackers can exploit the inconsistencies created unknowingly in interdependent systems of virtual networks. The data network, for example, is governed by the policies of the management network, and the SDN controller executes policies. Adversaries can maliciously insert fake policies in the management network, and the SDN controller unwittingly implements them.

Artificial Intelligence

In this shadowy cybersecurity world, artificial intelligence is widely touted as a means to find the clues to lurking malware. Chris Morales, head of security analytics at Vectra, said his company specializes in tracking cyber threats inside networks by analyzing data from packet headers to find patterns in communication between devices and their relationships.

“We focus on the duration, timing, frequency, and volume of network traffic,” he said. “Data on a sequence of activities point to hidden risks. An illustrative sequence that is a telltale sign of malicious activity is an outside connection initiating largely outbound data transfers and small inbound requests, together with sweeps of ports and IP addresses, searching for databases, and file servers, followed by attempts at administrative access.”
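
A toy version of that kind of sequence scoring is sketched below: each behavior is a weak signal on its own, but their combination pushes the risk score over a threshold. The behavior names, weights and threshold are invented for illustration and are not Vectra's model.

```python
# Toy illustration of sequence scoring: individual behaviors are weak signals,
# but their combination raises the risk score. Weights are made up.

SUSPICIOUS_WEIGHTS = {
    "large_outbound_transfer": 3,
    "small_inbound_requests": 1,
    "port_sweep": 2,
    "ip_sweep": 2,
    "database_search": 2,
    "admin_access_attempt": 4,
}

def risk_score(observed_sequence):
    """Sum weights of observed behaviors; flag when the combination is strong."""
    score = sum(SUSPICIOUS_WEIGHTS.get(b, 0) for b in observed_sequence)
    return score, score >= 8   # threshold chosen arbitrarily for the sketch

print(risk_score(["small_inbound_requests", "large_outbound_transfer",
                  "port_sweep", "admin_access_attempt"]))
```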

Artificial intelligence, however, is not a panacea, as machine-learning algorithms have chinks that cybercriminals can exploit with their own learning algorithms. AI-augmented malware tweaks its malicious code as it learns about the detection of its earlier versions. Cybercriminals can also fool the defending algorithms by feeding them subtly misleading data (adversarial learning), such as pictures of effeminate males that are then mistakenly labeled as females.

As the cybersecurity arms race spirals ad infinitum, some industry experts are taking a step back to consider an entirely different course of action. “Hackers essentially reverse engineer code to find flaws in software and determine how to exploit them. By adopting methodologies like the secure development lifecycle (SDLC), software developers can use analytics tools to detect errors and eliminate them at the outset,” said Bill Horne, vice president, and general manager with Intertrust Technologies Corporation.

Deep Instinct’s Shimon Noam Oren, head of Cyber Intelligence, had an altogether different take on the matter. His company’s data and analytical model are designed to track unknown unknowns while current models can at best detect known unknowns.

“Data on the behavior of malware limits the training of current algorithms to known threats,” he said. “Binary sequences, the most basic level of computer programming, account for all raw data and the infinite number of combinations that are possible. Some of these sequences represent current threats, and others are possibilities open to adversaries.

“Current predictive modeling techniques in security are linear while Deep Instinct’s model is non-linear, which affords greater flexibility for the machine to autonomously simulate and predict unknown unknowns extrapolating from data on existing threats as if solving a crossword puzzle.”

The most likely scenario for the future is that improved software development methodologies will slow the growth of vulnerabilities from the current breakneck pace. Zero-defect software is improbable in this environment. Ever more sophisticated AI engines will build defenses against the remaining hidden threats.

A version of this article was previously published by Light Reading’s Telco Transformation

Cognitive AI: the human DNA of Machines

by Kishore Jethanandani

Cognitive computing lends the five senses of humans to machines

Cognitive artificial intelligence (AI) is a step change in machine intelligence with added data from image recognition, speech recognition, video and audio recognition in consumer and enterprise network applications.

As a result, service providers will be saddled with exponentially higher data volumes spread over many more edge nodes on distributed networks, all of which makes them more susceptible to wilder traffic spikes than ever before.

Applications 

Microsoft’s consumer application for the blind, which allows them to be mobile, epitomizes the spectrum of cognitive artificial intelligence capabilities. The blind perceive objects with video recognition and receive environmental data from sensors to navigate freely. Cloud and network-hosted machine intelligence processes all of that data in the background.

Enterprise applications — boosted by APIs used by Amazon’s Alexa, among others — have focused on customer service and accelerating business processes. Boxover, for example, integrates CRM databases and speech recognition so that airlines can notify customers about missing bags via chatbots. Information flows seamlessly on distributed networks from operations to customer data, and onwards to a chatbot on a passenger’s smartphone.

The euphoria over chatbots in 2016 has waned as consumers are discouraged by the wrinkles in their design. Investments in natural language processing and other types of cognitive AI, however, are growing unabated. An MIT Technology Review survey found that companies currently investing in machine intelligence are focused on natural language processing (45%) and text classification and image recognition (47%), among other areas. For this year, text classification (55%) and natural language processing (52%) are among the top priorities for machine intelligence planners.

Service providers 

Cognitive AI applications that process multiple streams of data are best run in real time on clouds and telecom networks. Lightbend has a platform designed for cognitive AI-enabled enterprise applications, based on the Reactive Fast Data platform — built on top of Scala and Java — to address the needs of elasticity, scalability, and failure management.

“A new approach is required for developers leveraging image, voice and video data across many pattern recognition and machine learning use cases,” said Markus Eisele, developer advocate at Lightbend. “Many of these use cases require response times in the hundreds-of-milliseconds timeframe. This is pushing the need for languages and frameworks that emphasize message-driven systems that can handle asynchronous data in a highly concurrent, cloud-based way — often in real-time.”

Microservices are key to keeping pace with the multitude of variations in data flows.

“Cognitive applications are often a set of data sources and sinks, in which iterative pipelines of data are being created,” Eisele said. “They call for low latency, complex event processing, and create many challenges for how state and failure are handled. Microservices allow composability and isolation that enables cognitive systems to perform under load, and gives maximum flexibility for iterative development.”
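
A minimal asyncio sketch of that message-driven, asynchronous style follows: a producer pushes events onto a queue and a consumer processes them concurrently, so a slow stage does not block ingestion. It is purely illustrative and not Lightbend's platform.

```python
# Minimal asyncio sketch of message-driven, asynchronous processing.
# The frame IDs and sleep are stand-ins for real data and model inference.

import asyncio

async def producer(queue):
    for i in range(10):
        await queue.put({"frame_id": i})      # e.g., an image or audio chunk
    await queue.put(None)                     # sentinel: no more work

async def consumer(queue):
    while True:
        item = await queue.get()
        if item is None:
            break
        await asyncio.sleep(0.01)             # stand-in for model inference
        print("classified frame", item["frame_id"])

async def main():
    queue = asyncio.Queue(maxsize=100)
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```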

Network solutions 

Cognitive AI’s ability to process a broad spectrum of data and content enables it to take on daunting challenges such as zero-day cyber-security threats, which are known for sabotaging the Iranian nuclear program and which elude search engines by conducting their operations in the shadowy world of the subterranean dark net. They are spotted by microscopic classification of data and content found by scouring the Internet, with machine learning algorithms able to parse any kind of file and ferret out those related to cyber threats and malware.

SparkCognition partnered with Google (Nasdaq: GOOG) to leverage TensorFlow, which is an interface designed to execute machine learning algorithms — including those capable of pattern recognition in cognitive data — to be able to identify threats lurking in millions of mobile and IoT devices.

“Signature based security software is updated periodically and falls short for protection against zero day or polymorphic malware,” said Joe Des Rosier, senior account executive at SparkCognition. “Our algorithm dissects the DNA of files suspected to be malicious and is deployed as a microservice to run on the endpoint [such as a mobile device] or in the cloud. Unsupervised self-learning aspects of our algorithm helps to keep pace with the changing landscape of cybersecurity.”

Manual inspection of equipment and facilities, spread geographically and in neighborhoods, is not fast enough for enterprises to make replacements in short order to avoid downtime. A customer of SparkCognition had 20,000 vending machines sending unspecified alerts with no way of separating false positives.

“Cognitive AI helps to characterize failures with visuals for parts and natural language to parse manuals,” said Tina Thibodeau, vice president of strategic alliances and channels at SparkCognition. “We use historical and real-time data to pinpoint the causes of expected failures with a long enough lead time for the customer to be able to act. Our service provider partner provides the connectivity, the data layer, and the business logic.”

Future of cognitive AI 

Robust cognitive AI systems are works in progress as their information architecture is honed for perfection by learning from the false starts of early applications.

“The value of any AI technology, not only cognitive, hinges on its knowledge base, content base, and the database curated to retrieve valuable information,” said Earley Information Science CEO Seth Earley. “A bot or a digital assistant is a retrieval engine, a channel, which needs an information architecture, knowledge engineering, and data integration before an AI engine can be trained to find the relevant information.”

Cognitive AI poses some unique challenges, according to Earley.

“Knowledge engineering for natural language considers its interactive nature,” he said. “When somebody has a query, the bots respond, and human agents affirm or make corrections in the classification of the intent. The learning algorithms evolve when humans edit the responses in increasing number of conversations. In speech recognition, the process starts with capturing the language of a message. Its interpretation, or the intent, follows and requires human effort.”

Earley said that deciphering accents was a part of speech recognition that improves as learning algorithms read the nuances in any expression. For video recognition, vectors and vector spaces — with clusters of the characteristics of objects — are used and people help to compare and identify them, Earley said.

The virtuous circle of adoption, improvement and redesign of applications has begun for cognitive AI. While still far from perfect, there is enough interest to advance its commercial viability.

Previously published by Light Reading’s Telco Transformation

5G: Customized services and apps at the edge

Computing at the edge

by Kishore Jethanandani

5G radio technologies at the edge

5G’s raison d’être is to quickly provision real-time services and applications across end-to-end networks to customers and businesses in various locations. While that seems a tall order, the technologies are coalescing to deliver on the 5G promise of faster, better and new services.

Adoption of 5G applications

While 5G applications are currently in the pilot stages, their adoption is expected to accelerate as a result of the demonstration effect from the 2018 Winter Olympics early next year in South Korea.

Barry Graham, director of product management at the TM Forum, asserts that it’s not all about the future. The industry has prioritized and enhanced the broadband aspect of 5G, which is more familiar to service providers. Mission-critical applications such as autonomous cars are further out on the horizon, while massive machine-type communications are even farther down the road, but the market for the latter is expected to be in the trillions of dollars. When it’s deployed, machine communication will be spread over hundreds of potential applications, and customized versions will be attractive because of the prospect of higher profitability.

“Internet of Everything applications are characterized by heterogeneity and a diversity of platforms,” Graham said. “They will benefit the most from a robust and pervasive cloud-native environment that can scale up and down.”

For a pervasive cloud-native environment, “the industry needs standards for the communication between the edge and the mobile edge when traffic does not have to be directed to the central cloud,” according to Graham.

“While making the best of their existing investments, CIOs should begin to plan for the Internet of Everything for use cases such as micropayments,” he concluded.

Edge Cloud

The edge cloud, enlarged with the incorporation of a cloudified RAN, will be transformed to meet the individual performance needs of 5G-enabled applications. Services can be tailor-made for customers and delivered in real time by placing all or most of the elements for service composition — such as VNFs, virtualized resources, microservices, management and orchestration software, and a cloud-native infrastructure that includes SaaS, IaaS, PaaS, and Cloud RAN — in close proximity to customers at the edge.

The unknown at the moment is the economic feasibility of the edge clouds. “Clouds are centralized for a reason; their economic returns are known. For edge clouds, network operators and the industry must now work hand-in-hand to develop financially feasible use-cases together,” said Franz Seiser, vice president of Core Network and Services at Deutsche Telekom.

A step change in the expected performance of 5G undergirds the confidence to adapt to market changes as they happen. Compared to 4G LTE latency of 15-20 milliseconds, 5G latency will be below 4 milliseconds for broadband and as low as 0.5 milliseconds for mission-critical applications. 5G is expected to deliver actual throughput of 500 Mbit/s to 5 Gbit/s, while 4G LTE’s throughput ranges from 6.5 Mbit/s to 26.3 Mbit/s.

Service providers as master orchestrators

 
Service providers will play a pivotal role in orchestrating all of the elements needed for customizing services, which they will source from the stakeholders in the ecosystem at the edge of their networks.

The managed service provider’s role is to build an end-to-end virtual network slice based on the needs of the customer. The managed service provider brokers resources from one entity to another while also ensuring that service-level agreements are met.

While network slicing has vast potential for enabling specific services and applications on the fly, service providers need to ensure that they have insight into the end-to-end framework including inter-slice management.

“ONAP is one potential candidate for a framework to manage cloudified networks, including network slicing,” Deutsche Telekom’s Seiser said.

Network slicing 

Network slicing provides the means to customize business solutions for verticals based on the performance needs spelled out by the SLAs signed with customers. “5G system aims to provide a flexible platform to enable new business cases and models… network slicing emerges as a promising future-proof framework to adhere by the technological and business needs of different industries,” according to a European Union document.

The keystone of network slicing, and its design challenge, is that “network slicing needs to be designed from an end-to-end perspective spanning over several technical domains (e.g., core, transport, and access networks) and administrative domains (e.g., different mobile network operators) including management and orchestration plane,” according to the European Union document.

On the stretch from the core to the access networks, each operator shares network capacity with peers or multiple tenants for the delivery of one or more services in its portfolio, which saves on the capital expenditures that must be minimized to make investments in customized services viable.

Industrial automation, such as collaborative robotics — which includes humans interacting with robots — is a vertical where network slicing is expected to gain acceptance.

“The latency and throughput requirements of collaborative robotics vary within a factory floor, or across an industrial region, and network slicing flexibly tailors network services for each of them,” said Harbor Research analyst Daniel Intolubbe-Chmil.

Network slicing end-to-end — from the core to customers’ facilities — will become possible with the reliability, flexibility and the desired throughput (collectively described as network determinism) that cloudified RAN networks will be able to deliver.

“In conjunction with mmWave technologies, they will support throughput comparable to Ethernet. Beamforming helps to achieve high reliability that maintains the quality of the signal end-to-end. Small cells lend greater flexibility and speed in deployment compared to the Ethernet,” Intolubbe-Chmil said.

SK Telecom deployed its first cloud RAN in the third quarter of last year in order to support a rollout of LTE-A, which is pre 5G, while also preparing a base for its upcoming 5G platform, according to Neil Shah, research director at Counterpoint Research.

“Network operators are ready to scale cloud RANs as they have greater clarity on the right mix of macro cell sites and small cells controlled by cloud baseband units,” Shah said.

Virtual reality 

Virtual reality looks to be the leading application of choice in the 5G environment. The countdown for its mass use has started as the organizers of the 2018 Winter Olympics next year in South Korea engineer their stadiums for 5G-enabled virtual reality even before the 5G standards are finalized.

Sports fans will be delighted with 360-degree virtual reality that includes the option to view footage from their preferred angle, such as from a sportsperson’s head-worn camera or a drone with a more panoramic view. They will also be able to switch streams from one sports arena to another in virtual reality.

“An edge cloud placed in the sports stadium, meeting the processing demand from VR streams flowing to hundreds of fans, lowers latency,” Seiser said. “Bandwidth is used efficiently when the data rate is reduced by delivering only a sliver of a VR stream needed to render content for the field of view of the user,” Seiser explained.

With a constellation of services and applications at the mobile edge, 5G cloud-native architectures are starting to converge to make widespread customization of services possible for service providers and their end customers.

Mesh networks open IOT’s rich last mile data seams for mining

By Kishore Jethanandani

Mesh networks (or the alternative star topology networks connecting devices to routers) afford the mining of data in IOT’s last mile. By interconnecting mobile devices, mesh networks can funnel data from sensors to local gateways or the cloud for analysis and decision-making.

Wired and wireless telecom networks do not reach the distant regions or the nooks and crannies for the mining of information-rich seams of data. Mining, oilfields, ocean-going vessels, electricity grids, emergency response sites like wildfires, and agriculture are some of the information-rich data sources rarely tapped for analytics and decision-making in real-time or otherwise.

Where telecom coverage is available, it does not necessarily reach all assets. Data generated by sensors embedded in equipment on factory floors, underground water pipes in cities, or inventory in warehouses cannot readily access cellular or wired networks.

A typical remote field site is an oil exploration and production operation in California with dispersed wells, where ten operators gathered data on tank levels, gas flows, and well-head pressures. Now, with a mesh network, operating managers can access this data anywhere and respond to alerts in real time.

Onsite mesh networks are deployed for microscopic monitoring of equipment to watch for losses such as energy leakages. Refineries are labyrinths of pipes with relief valves to release pressure to avoid blow-ups. One of them in Singapore had one thousand valves to monitor manually. These valves do not necessarily shut tightly enough, or need maintenance and gases trickle out. Over time, the losses add up to a lot. Acoustic signals can pinpoint otherwise unnoticeable leakages and transmit the data via mesh networks to databases; any deviation from pattern prompts action to stop the losses.
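
The "deviation from pattern" check can be pictured with a small sketch: compare a valve's latest acoustic reading against its rolling baseline and flag readings that fall far outside it. The units, history window and three-sigma threshold are assumptions for illustration.

```python
# Sketch of a deviation-from-pattern check on acoustic readings from a valve.
# Thresholds and units (decibels) are illustrative assumptions.

from statistics import mean, stdev

def leak_suspected(history_db, latest_db, sigma=3.0):
    """Flag a reading more than `sigma` standard deviations above the baseline."""
    if len(history_db) < 10:
        return False                      # not enough history to judge
    baseline, spread = mean(history_db), stdev(history_db)
    return latest_db > baseline + sigma * max(spread, 1e-6)

history = [42.1, 41.8, 42.5, 42.0, 41.9, 42.3, 42.2, 41.7, 42.4, 42.0]
print(leak_suspected(history, 49.5))      # True: likely a leaking valve
```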

The prospects of on-premise mesh networks adoption have improved with the emergence of smart Bluetooth and beacons. With smart Bluetooth technology, an IP layer is built on top of the data layer for ease of connecting devices. Beacons are publicly available for anyone to use for building networks.

We spoke to Rob Chandhok, the President and Chief Operating Officer at San Francisco-based Helium Systems Incorporated, to understand his company’s approach to mining the data in IOT’s last mile. Helium’s current solutions target the healthcare industry and in particular its refrigeration and air conditioning equipment. “Hospitals have hundreds of refrigerators to store medicines which are likely to be damaged if the doors are inadvertently left open,” Rob Chandhok explained to us.

The touchstone of Helium’s technology is its programmable sensors embedded with a choice of scripts capable of rudimentary arithmetic like calculating the difference in temperature between two rooms. As a result, the sensors generate more data than would be possible with the investment in dumb hardware alone. Helium uses star topology for the last mile network connected to the cloud which hosts a platform for analytical solutions. The entire system is configurable from the sensor to the cloud for generating data for the desired thresholds and alerts or analytical models.

“The architecture is intended to optimize processes across the system,” Rob Chandhok told us. He illustrated with an example of the impact of pressure differences; germs are less likely to enter if the internal pressure is higher than the external pressure.

Configurable sensors help to tailor a solution to the desired outcome. Vaccine potency is greatest if the temperature stays in the range of 2-8 degrees centigrade (35.6F-46.4F). By contrast, cell cultures are rendered useless, and thousands of dollars lost, if the temperature falls outside the range of 37 degrees centigrade (plus or minus 0.5).
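
A hedged sketch of such configurable threshold alerts is below; the ranges follow the figures in the text, while the asset names and alert format are hypothetical.

```python
# Configurable threshold alerts per asset type. Ranges follow the text;
# asset names and alert wording are hypothetical.

RANGES_C = {
    "vaccine_fridge": (2.0, 8.0),            # vaccine potency range
    "cell_culture_incubator": (36.5, 37.5),  # 37 C plus or minus 0.5
}

def check_reading(asset_type, temperature_c):
    low, high = RANGES_C[asset_type]
    if low <= temperature_c <= high:
        return None
    return f"ALERT: {asset_type} at {temperature_c} C, outside {low}-{high} C"

print(check_reading("vaccine_fridge", 9.3))
print(check_reading("cell_culture_incubator", 37.2))
```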

In the hospitality industry, the purpose is to improve customer service by keeping temperatures in range to minimize discomfort. Guests do not have to wait until air-conditioning brings temperatures to the desired levels which vary by region and seasons.

The roster of solutions expands as Helium learns more about the clients’ problems. In the course of working with customers in hospitals, Helium was made aware of the routine underutilization of beds. Speaking of future plans, Rob Chandhok said, “We can improve the rate of utilization of beds in hospitals with automatic and real-time tracking with infrared sensors.”  Currently, nurses manually record the state of occupancy of beds usually with a lag. “Nurses are unable to keep pace as patients are moved after short intervals before and after operations,” Rob Chandhok explained.

For a field site application, we spoke to Weft’s Ganesh Sivaraman, Director of Product Management, as well as Erin O’Bannon, Director of Marketing, about its supply chain solutions. The company uses mesh networks to determine the location and condition of cargo on ships, their expected time of arrival, the inflow of cargo once ships are docked at ports, and the extent of port congestion. “The mesh network brings near real-time visibility to the state of flow of the cargo,” Ganesh Sivaraman told us. However, he clarified that currently its applications do not afford the tracking of cargo at the pallet level and their flow in the supply chain. “We use predictive analytics, using proprietary and third-party data sources, to help clients time the deployment of trucks to pick up the cargo with minimal delay,” Erin O’Bannon told us. “With visibility, clients can anticipate delays, which lets them plan for alternative routes for trucks to shorten delivery times or switch to air transportation if the gain in time is worth the cost,” Erin O’Bannon explained.

Mesh networks will evolve from vertical solutions to an array of horizontal ones. Home automation, for example, will likely be linked with fire prevention services and with the connected cars of homeowners. Left to themselves, analytics companies can potentially create duplicative infrastructure. We spoke to Shilpi Kumar of Filament, a company specializing in mesh connectivity for industries, to understand how this evolution will shape the architecture of last-mile IOT networks. “Decentralized mesh infrastructure-as-a-service serves the needs of multiple analytics companies with network policies enforced by blockchain-based contracts,” Shilpi Kumar, Product Development, told us. “The interlinking of mesh networks with a secure overlay prepares the way for exchanges between devices in an ecosystem, such as vehicles paying for parking automatically,” Shilpi Kumar revealed.

Mesh networks expand the universe of the Internet of Things by making remote data sources accessible. They also raise the level of granularity of data sources that are nominally reachable with existing networks. As a result, these mesh networks expand the array of opportunities for optimizing business processes.

Fog Computing: Bringing cloud vapors to the IOT fields

Sensor data creates needs for local analytics that fog computing serves

By Kishore Jethanandani

Fog computing has arrived as a distinct class of customized solutions catering to local analytical needs in the physical ecologies that constitute the Internet of Things. Sensors at field sites like wind farms stream vast volumes of data that can be processed expeditiously only in the vicinity, and the resulting actionable intelligence is intuitive to local decision-makers. Cloud analytics, by contrast, delays data flows and their processing for far too long, and the data loses its value.

Bringing Analytics close to sensor data

The configuration and customization of fog computing solutions address a heterogeneous mix of speed, size, and intelligence needs. An illustrative case is Siemens’ gas turbines, each of which has five thousand embedded sensors pouring data into databases. Data aggregated locally helps to compare performance across gas turbines, and this happens in the moment, as sensors stream live data that can be analyzed and acted on instantaneously.

An entirely different situation is intelligent traffic lights that sense the light beams of incoming ambulances and clear the way for them while alerting other vehicles ahead to reroute or slow down before they choke the traffic. In this case, the data analysis spans a region.

Time is of the tactical essence for the users of information generated by sensors and connected devices. A typical application is monitoring the use of high-value assets such as jet engines; a breakdown could have a spiraling effect on the scheduling of flights. Sensors generate data every second or millisecond that needs to be captured and analyzed to predict equipment failures in the moment. The volumes of data are inevitably massive and delays in processing intolerable. Fog analytics slashes the time delays that are inescapable with cloud computing by parsing the data locally.

Users have expressed keen interest in gathering and analyzing data generated by sensors but have reservations about the technology and its ability to serve their needs. A study completed by Dimensional Research in March 2015 found that eighty-six percent of respondents reported that faster and more flexible analytics would increase their return on investment. The lack of conviction about analytics technologies is palpable in the finding that eighty-three percent of the respondents collect data but only eight percent capture and analyze it in time to make critical decisions.

The value of data

We spoke to Syed Hoda, Chief Marketing Officer of ParStream, a company that offers an analytics database and a platform for real-time analytics on IoT data volumes as large as petabytes, to understand how new breakthroughs in technology help to extract value from it.

ParStream’s technology helps companies gain efficiencies from IoT data, which is event-specific. The productivity of wind turbines, as measured by electricity generated, is higher when their speed is proportionate to the wind speed, which is possible when their blades do not buck the wind direction. “By analyzing data at once, companies can get better at generating actionable insights, and receive faster answers to what-if questions to take advantage of more opportunities to increase productivity,” Syed Hoda told us.

ParStream slashes data processing time with edge processing at the gateway level, rather than aggregating data centrally. It stores and analyzes data on wind turbines, for example, at the wind farm. Numerical calculation routines, embedded in local databases, process arrays of live data streams, instead of individual tables, to flexibly adjust to computation needs.
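
The gateway-level aggregation idea can be sketched as follows: raw per-second readings are summarized locally and only compact aggregates travel onward, so queries get fast answers from local data. This is an illustration of the general pattern, not ParStream's engine; the turbine IDs and readings are invented.

```python
# Illustrative edge aggregation: reduce raw per-second samples at the gateway
# to compact per-turbine summaries. Data is made up for the example.

from collections import defaultdict

def aggregate_at_gateway(readings):
    """Reduce raw (turbine_id, power_kw) samples to per-turbine summaries."""
    sums, counts = defaultdict(float), defaultdict(int)
    for turbine_id, power_kw in readings:
        sums[turbine_id] += power_kw
        counts[turbine_id] += 1
    return {t: {"avg_kw": sums[t] / counts[t], "samples": counts[t]}
            for t in sums}

raw = [("T1", 1510.0), ("T1", 1495.5), ("T2", 980.2), ("T2", 1010.7)]
print(aggregate_at_gateway(raw))
```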

Unstructured data and quality of service

We spoke to three experts, who preferred to remain anonymous, employed by a leading company in fog computing about the state of the technical and commercial viability of IOT data-based analytics. They do not believe that impromptu learning from streaming data flowing from devices in the Internet of Things is yet feasible. In their view, the IOT affords only preconfigured inquiries, such as comparing current data to historical experience for purposes such as distinguishing an employee from an intruder.

In their view, analytics in local regions encompass applications that need unstructured data, such as image data for face recognition, which are usable only with a consistent quality of service. In a shared environment like the Internet of Things, a diversity of demands on a network is potentially detrimental to service quality. They believe that new processors afford the opportunity to dedicate the processing of individual data streams to specific cores to achieve the desired quality of service.

Fog computing applications have become user-friendly, as devices with intuitive controls for functions like admitting visitors or overseeing an elevator are more widely available. The three experts confirmed that solutions for several verticals have been tested, found to be financially and operationally workable, and are ready for deployment.

Edge Intelligence

Another approach to edge intelligence is using Java virtual machines and applets for intelligence gathering and for executing controls. We spoke to Kenneth Lowe, Device Integration Manager for Gemalto’s SensorLogic Platform, about using edge intelligence for critical applications like regulating the temperature in a data center. “Edge intelligence sends out an alert when the temperature rises above a threshold that is potentially damaging to the machines while allowing you to take action locally and initiate cooling, or in the worst case, shut the system down without waiting for a response from the cloud,” Kenneth Lowe told us. “The SensorLogic Agent, a device-agnostic software element, is compiled into the Java applet that resides on the M2M module itself. As sensors detect an event, the Agent decides how to respond: process the data locally, or send it to the cloud for an aggregated view,” Kenneth Lowe explained.
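
A conceptual sketch of that local-versus-cloud decision is below; it is not Gemalto's SensorLogic API, and the thresholds and action names are assumptions for illustration.

```python
# Conceptual sketch of an edge agent's decision: act locally on critical
# readings, batch routine data for the cloud. Thresholds are hypothetical.

def on_temperature_reading(celsius, critical_c=30.0, shutdown_c=40.0):
    """Decide locally, without waiting for the cloud."""
    if celsius >= shutdown_c:
        return "shut_down_equipment"       # worst case: act immediately on site
    if celsius >= critical_c:
        return "start_cooling_and_alert"   # local actuation plus an alert
    return "buffer_for_cloud_upload"       # routine data, aggregate later

for reading in (24.5, 31.2, 41.0):
    print(reading, "->", on_temperature_reading(reading))
```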

Java virtual machines help to bring analytics from the cloud to the edge, not only to gateways but all the way to devices. We spoke to Steve Stover, Senior Director of Product Management at Predixion Software, which deploys analytic models on devices, on gateways and in the cloud. The distribution of analytics intelligence to devices and gateways allows it to function in small or large footprints and in disconnected or connected communication environments.

“We can optimize wind turbine performance in real time by performing predictive analytics on data from multiple sensors embedded on an individual turbine in a wind farm,” Steve Stover told us. “Orderly shutdowns prompted by predictive analytics running on the gateway at the edge of the wind farm help to avoid costly failures that could have a cascading effect,” he added.

Similarly, analytics on the cloud can compare the performance of wind farms across regions for purposes of deciding investment levels in regional clusters of wind farms.

Fog computing expands the spectrum of analytics market opportunities by addressing the needs of varied sizes of footprints. The geographical context, the use cases, and the dimensions of applications are more differentiated with fog computing.

Previously published by All Analytics of UBM TechWeb

Knowing the unknown by digging deep

by Kishore Jethanandani

Deep learning, also referred to as neural network algorithms, is a lot like solving a crossword puzzle: the unknowns in gargantuan data stores are knowable only by their relationships with the known. Unsupervised deep learning goes further and does not presume, at the outset, any knowledge of the interdependencies in the data.

Supervised deep learning is analogous to searching for an undersea destination like an oil well with the knowledge of the coastline alone. It reads the known relationships in the geophysical data in the layers underneath the seashore to reach, progressively, the oil well. Unsupervised learning first establishes whether a relationship exists between the contours of the coastline and the subterranean topography.
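
The contrast can be made concrete with a small neural-network sketch: an unsupervised autoencoder learns the structure of unlabeled data and flags points it reconstructs poorly, whereas a supervised model would fit the same kind of network to known labels. The synthetic data, network size and library choice (TensorFlow/Keras) are assumptions for illustration.

```python
# Hedged sketch of unsupervised deep learning: a tiny autoencoder trained only
# on "normal" data flags unfamiliar points by their reconstruction error.
# Synthetic data; requires TensorFlow.

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 8)).astype("float32")   # known behavior
odd = rng.normal(5.0, 1.0, size=(5, 8)).astype("float32")         # unseen anomalies

# Unsupervised: compress and reconstruct; no labels involved.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(3, activation="relu"),
    tf.keras.layers.Dense(8),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=10, verbose=0)

def reconstruction_error(x):
    return np.mean((autoencoder.predict(x, verbose=0) - x) ** 2, axis=1)

print("typical error:", reconstruction_error(normal[:5]))
print("anomaly error:", reconstruction_error(odd))   # noticeably larger
```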

We spoke to Dr. Charles H. Martin, a long-time expert in machine learning and the founder of Calculation Consulting, about the prospects for enterprise applications of supervised and unsupervised deep learning. “Many in the business world recognize the vast potential of applications of deep learning, and the technology has matured for widespread adoption,” Dr. Martin surmised. “The most hospitable culture for machine learning is scientific and open to recurring experimentation with ideas and evolving business models; the legacy enterprise fixation on engineering and static processes is a barrier to its progress,” he underscored.

Unstructured data abounds, and the familiar methods of analyzing it with categories and correlations do not necessarily apply. The size and variety of such databases can elude modeling. These unstructured databases hold valuable information, such as social media conversations about brands, video from traffic cameras, sensor data from factory equipment, or trading data from exchanges, but extracting it is akin to finding a needle in a haystack. Deep learning algorithms find the brand value in positive and negative remarks on social media, elusive fugitives in the video from traffic cameras, the failing equipment in the factory data, or the investment opportunity in the trading data.

“Unsupervised deep learning helps in detecting patterns and hypothesis formulation while supervised deep learning is for hypothesis testing and deeper exploration,” Dr. Martin concluded.  “Unsupervised deep learning has proved to be useful for fraud detection, and oil exploration—anomalies in the data point to cybercrime and oil respectively,” he explained. “The prediction of corporate performance using granular data such as satellite imagery of traffic in the parking lots of retail companies is an example of the second generation of supervised deep learning,” Dr. Martin revealed.
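
A hedged PyTorch sketch of the unsupervised side of that division of labor: an autoencoder is trained only on records assumed to be normal, and records it reconstructs poorly are flagged as anomalies worth a closer supervised or human look. The synthetic data and the three-sigma threshold are assumptions for demonstration, not a production fraud model.

```python
# Illustrative sketch of unsupervised anomaly detection: an autoencoder trained
# only on "normal" records; records it reconstructs poorly are flagged for
# review. Synthetic data and thresholds are assumptions for demonstration.
import torch
from torch import nn

torch.manual_seed(0)
normal = torch.randn(1000, 8)                 # stand-in for normal transactions
model = nn.Sequential(nn.Linear(8, 3), nn.ReLU(), nn.Linear(3, 8))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):                          # learn to reconstruct normal data
    optimizer.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    errors = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = errors.mean() + 3 * errors.std()      # three-sigma cutoff
    suspicious = torch.randn(5, 8) * 6                # exaggerated outliers
    flags = ((model(suspicious) - suspicious) ** 2).mean(dim=1) > threshold
    print("flag for review:", flags.tolist())
```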

Early detection of illnesses from medical imaging is one category of problems that deep learning is well suited to address. Citing the example of COPD (Chronic Obstructive Pulmonary Disease), Dave Sullivan, the CEO of Ersatz Labs, a cloud-based deep learning company based in San Francisco, told us, “The imaging data shows nodules, and not all of them indicate COPD. It is hard for even a trained eye to tell one from another. Deep learning techniques evolve as they are calibrated and recalibrated (trained) on vast volumes of data gathered in the past, and they learn to distinguish between them with a high degree of accuracy in individual cases.”
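
The supervised counterpart can be sketched just as briefly. The toy PyTorch classifier below is trained on labeled image patches (benign versus suspicious); real diagnostic systems use far larger networks, curated datasets, and clinical validation, so the shapes and labels here are synthetic stand-ins.

```python
# Compact, illustrative sketch of supervised image classification: a small CNN
# trained on labeled scan patches. Shapes, labels, and data are synthetic.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),            # 64x64 input -> 16x16 after pooling
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

patches = torch.randn(32, 1, 64, 64)        # synthetic 64x64 grayscale patches
labels = torch.randint(0, 2, (32,))         # 0 = benign, 1 = suspicious
for _ in range(5):                          # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
print("predicted class:", model(patches[:1]).argmax(dim=1).item())
```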

Clarifai has democratized access to its deep learning with its API, which allows holders of data to analyze it and benefit from the insights. We spoke to Matthew Zeiler, the CEO and founder of Clarifai, to understand how its partners use the technology. One of them is France-based i-nside.com, a healthcare company that employs smartphones to conduct routine examinations of the mouth, ear, and throat to generate data for diagnosis. “In developing countries where doctors are scarce, the analysis of the data points to therapies that are reliable,” Zeiler told us. “In developed countries, the analysis of the data supports the judgment of doctors, and they have reported satisfactory results,” Zeiler added.

The enterprise is not the only place where deep learning has found a home—consumer applications such as Google Now and Microsoft’s Cortana, among other virtual assistants, are available in the market. People are often anxious and distracted, at work or play, when they are unable to keep track of critical events that could affect them or their family. Home surveillance watches pets, the return of young children from school, elderly relatives at risk of falls, the arrival of critical packages, and more. What matters is an alert on an unusual event. Camio uses the camera of a handheld phone or any other home device, such as a computer, to capture video of happenings at home. When something irregular happens, IFTTT sends alerts.
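
One way such an alert could be wired up, as a hedged sketch: a webhook call to IFTTT when the camera's analysis labels an event as unusual. The event name, key, and detection stub are hypothetical; the URL follows the widely documented IFTTT Webhooks ("Maker") trigger format, not Camio's own integration.

```python
# Hedged sketch: fire an IFTTT webhook when a home camera's analysis marks an
# event as unusual, so the user gets a phone notification. Event name, key,
# and the detection stub are assumptions; the URL follows the documented
# IFTTT Webhooks ("Maker") format.
import requests

IFTTT_EVENT = "unusual_home_event"       # hypothetical event name
IFTTT_KEY = "YOUR_IFTTT_WEBHOOKS_KEY"    # placeholder credential

def notify(description: str) -> None:
    url = f"https://maker.ifttt.com/trigger/{IFTTT_EVENT}/with/key/{IFTTT_KEY}"
    requests.post(url, json={"value1": description}, timeout=10)

def looks_unusual(event: dict) -> bool:  # stand-in for the camera's analytics
    return event.get("label") not in {"pet", "family_member"}

event = {"label": "unknown_person", "time": "14:32"}
if looks_unusual(event):
    notify(f"Unusual activity at home: {event['label']} at {event['time']}")
```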

Deep learning mimics the neurons of the brain to sift the meaningful relationships otherwise lost in the clutter of humongous streams or data stores. Machines can do it faster when the correlations are known. When they are unknown, deep learning helps to discern the patterns before deciding to invest time in deeper investigations.

 

Cyber-detectives chase cyber-criminals armed with Big Data

by Kishore Jethanandani

Cyber-security in enterprises is caught in a dangerous time warp—the long-held assumption that the invaluable information assets of companies can be cordoned off within a perimeter, protected by firewalls, no longer holds. The perimeter is porous, with countless access points opened by a mobile and distributed workforce and by partners’ networks with remote access rights to corporate data via the cloud.

Mobile endpoints and their use of the cloud for sharing corporate data have been found to be the most vulnerable conduits that cyber-criminals exploit for launching the most sophisticated attacks (advanced persistent threats) intended to steal intellectual property. The Ponemon Institute’s survey of cyber-security attacks, over twenty-four months, found that 71 percent of companies reported endpoint security risks as the most difficult to mitigate. The use of multiple mobile devices to access the corporate network was reported to be the highest risk, with 60 percent reporting so, and another 50 percent considered the use of personal mobile devices for work-related activity to be the highest risk. The second most important class of IT risks was third-party cloud applications, with 66 percent reporting so. The third IT risk of greatest concern was advanced persistent threats.

In an environment of pervasive vulnerabilities, enterprises are learning to remain vigilant about anomalous behavior pointing to an impending attack from criminals. “Behavioral patterns that do not conform to the normal rhythm of daily activity, often concurrent with large volumes of traffic, are the hallmarks of a cyber-criminal,” Dr. Vincent Berk, CEO and co-founder of Flowtraq, a Big Data cyber-security firm that specializes in identifying behavioral patterns of cyber-criminals, told us.  “A tell-tale sign of an imminent cyber attack is unexpected network reconnaissance activity,” he informed us. Human beings need to correlate several clues emerging from the data analysis before drawing conclusions because criminals learn new ways to evade surveillance.
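
The two signals Dr. Berk mentions, volumes far outside the normal rhythm and reconnaissance-like probing, can be illustrated with a simple sketch (not FlowTraq's actual method); the thresholds and flow records below are assumptions.

```python
# Illustrative sketch (not FlowTraq's method) of two anomaly signals: traffic
# volumes far outside a host's normal hourly rhythm, and reconnaissance-like
# fan-out to many distinct ports. Thresholds and records are assumed.
from collections import defaultdict
import statistics

def volume_anomaly(hourly_baseline_bytes, current_bytes, z_limit=3.0):
    mean = statistics.mean(hourly_baseline_bytes)
    stdev = statistics.pstdev(hourly_baseline_bytes) or 1.0
    return (current_bytes - mean) / stdev > z_limit

def recon_suspects(flow_records, port_fanout_limit=100):
    """flow_records: iterable of (src_ip, dst_ip, dst_port) tuples."""
    ports_touched = defaultdict(set)
    for src, dst, port in flow_records:
        ports_touched[(src, dst)].add(port)
    return [pair for pair, ports in ports_touched.items()
            if len(ports) > port_fanout_limit]

baseline = [1.1e9, 0.9e9, 1.0e9, 1.2e9, 0.95e9]      # historical bytes this hour
print(volume_anomaly(baseline, current_bytes=9.5e9))  # True: abnormal surge
flows = [("10.0.0.5", "10.0.1.9", p) for p in range(1, 500)]
print(recon_suspects(flows))                          # fan-out to 499 ports
```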

Enterprises now recognize the importance of learning to recognize the “fingerprints” of cyber-criminals from their behavior. A 2014 survey by PricewaterhouseCoopers found that 20 percent of the respondents see security information and event management tools as a priority, and an equal number see event correlation as a priority. These technologies help to recognize the behavioral patterns of cyber-criminals.

“Scalability of Big Data solutions to identify the behavior of cyber-criminals is the most daunting challenge,” Dr. Berk told us. “We extract data from routers and switches anywhere in the pathway of data flows in and out of the extended enterprise,” he explained. “The fluidity of enterprise networks today, with increasing virtualization and recourse to the cloud, makes it challenging to track them,” he added. “Additionally, mergers and acquisitions add to the complexity as more routers and switches have to be identified and monitored.”

Dr. Berk underscored the importance of avoiding false positives, which could lead to denial of access to legitimate users of the network and interruption of business activity. “Ideally, we want to monitor at a more granular level, including the patterns of activity on each device in use, and any departures from the norm, to avoid false positives,” he told us. The filter of human intelligence is still needed to isolate false positives.

“Granular monitoring is more accurate and has uncovered sophisticated intruders who hide inside virtual private networks (VPNs) or encrypted data flows,” Dr. Berk revealed. Often, these sophisticated attackers have been present for years unnoticed. “The VPNs and the encryption are not cracked, but the data is analyzed to understand why they are in the network,” he explained.

Cyber-security will increasingly be a battle of wits between intruders and their victims. Big Data analysis notwithstanding, cyber-criminals will find new ways to elude their hunters. The data analysis will provide clues about the ever-changing methods used by cyber-criminals and the means to guard against their attacks. The quality of human intelligence on either side will determine who wins.

 

The long arm of the law extended by technology

Law enforcement faces daunting challenges, and many cases go cold. It loses the trail when it pursues criminals into the woods, fugitives are often elusive as they flee the scene of the crime, clues don’t tell enough of a story, information clutter is overwhelming, or identification is not accurate enough.

New technologies are coming to the aid of law enforcement. Infrared and thermal imaging help to track down criminals in the dark, analytics can trace the data footprints of fugitives, laser-enabled 3D imaging helps to recreate the scene of the crime, augmented reality presents relevant information in the line of sight, and biometrics offers a variety of ways to identify criminals.

In a radio program, I discussed these issues with Kim Davis, the former editor of UBM’s Future Cities.

 


Infrared mobile devices: light under the cover of darkness

By Kishore Jethanandani

Consumer mobile devices are extending their reach into the enterprise, fulfilling more than the communication needs of distributed workforces as they are incorporated into business processes. A bevy of companies have launched infrared mobile devices to supplement or compete with the far more expensive infrared cameras that have historically been used for specialized, high-value applications.

Mass use of inexpensive infrared mobile devices in the enterprise meets a variety of operational needs to increase productivity and minimize risk. Impending equipment failures, indicated by cracks, are invisible to the human eye but can be detected by infrared devices. Additionally, intruders hiding in dark corners are spotted by infrared devices. Leaks and attendant energy losses, unnoticed by the naked eye, are visible to infrared devices.

Infrared devices have a unique ability to discern objects that the eye cannot. Intruders, for example, are detected by reading the differences between body and room temperature.
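
A minimal NumPy sketch of that principle: in a thermal frame expressed in degrees Celsius per pixel, pixels in the human body-temperature band that stand well out from the ambient background are flagged as a possible intruder. The frame, thresholds, and warm blob are synthetic.

```python
# Minimal sketch: flag a possible intruder in a thermal frame (degrees C per
# pixel) when enough pixels sit in the body-temperature band well above the
# ambient background. All values here are synthetic.
import numpy as np

def detect_warm_body(frame_c: np.ndarray, min_pixels: int = 50) -> bool:
    ambient = np.median(frame_c)                      # background room temperature
    body_band = (frame_c > ambient + 8) & (frame_c > 28) & (frame_c < 40)
    return int(body_band.sum()) >= min_pixels

rng = np.random.default_rng(0)
frame = rng.normal(loc=19.0, scale=0.5, size=(120, 160))  # cool, dark room
frame[40:70, 60:80] = 34.0                                # warm, person-sized blob
print("possible intruder:", detect_warm_body(frame))      # True
```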

Among the new entrants is an Israeli company, Opgal, which has implemented Therm-App for law enforcement agencies and will expand its markets to private security firms, construction, and more. In collaboration with Lumus, it is also offering an equivalent wearable option with the ability to display thermal images in the center of an eyeglass.

The market leader, FLIR, which has a long history in infrared cameras, has announced FLIR One, a camera that is reportedly going to be used with Apple mobile devices.

Scotland-based Pyreos has launched a low-power mobile device for applications such as sensing noxious gases in industrial plants.

Opgal expects to compete with the entrenched incumbent, FLIR, by “offering an open platform for applications development over the cloud, with the added benefits of much higher resolution (384×288) and depth of field than is possible with existing handheld devices, to have adequate room to play for continuous development of new applications,” Amit Mattatia, CEO of Opgal, told us.

Opgal has gained significant traction in law enforcement by helping police officers become more effective by piercing the veil of darkness with infrared. According to an FBI study, the largest number of deaths of law enforcement officers took place between midnight and 2 a.m. The fatalities happen when officers are in pursuit of fugitives outside of their vehicles. Officers are prone to injury or to losing their way in the darkness, and they are hard to find when they do. “Officers wear Opgal’s mobile device on their body, and their pathway is traced by infrared and communicated to a remote officer on a local map, which can also be kept as a record for court proceedings,” Mattatia told us.

Experienced professionals in the industry, long-time users of infrared cameras, expressed skepticism about the ability of mobile devices to progress beyond some rudimentary applications. “A building inspector, for example, can detect a problem such as a wet spot due to a leakage but the resolution of images captured with mobile devices is not sharp enough to find its cause or source,” Gregory Stockton of Stockton Infrared, a company based in Randleman, North Carolina, told us. “Opgal’s Therm-App is an exception with its high resolution and depth of field but its price at four thousand dollars is comparable to proven and integrated alternatives like the FLIR E-8,” Mr. Stockton remarked. “The utility of most other mobile devices will be to detect problems before professional alternatives are sought for diagnosis and solutions,” Mr. Stockton concluded.

The services of professionals are needed for their specialized skills in any one of the varying types of infrared imagery. “Broadly, the infrared imaging market is divided between the short-wave, mid-wave and long-wave. The short-wave and mid-wave are largely confined to the military and require very large investments, while most professional imaging for the enterprise happens in the long-wave,” John Gutierrez of Infrared Thermal Imaging Services told us. “It takes a trained eye to detect the source of problems by interpreting infrared images, such as those showing gas leaks in buildings, and the data on temperature differences and air movement,” Mr. Gutierrez explained. According to him, none of this can be done by a lay user of mobile devices with infrared cameras, but their widespread availability raises awareness about thermal imaging and the solutions possible with it.

The last word on the interplay of mobile devices and independent infrared cameras can hardly be said at this point in time. Mobile devices have been undergoing rapid transformation with improving capabilities in image capture and prolific applications development. Infrared cameras are more robust as hardware but users are more likely to be sensitive to their price as they weigh the alternative of a mobile device. One thing is certain—the market for thermal imaging will not be the same.

 

The post was previously published by the now defunct Mobility Hub of UBM Techweb

 

 

Enterprise mobile apps: carriers help overcome geography

By Kishore Jethanandani

Mobile broadband carriers are discovering their unique latent strengths in the provision of mobile enterprise platforms and applications stretching across multiple geographies. In providing a seamless mobile experience for nationwide or global enterprises, they see an opportunity to increase their margins.

Enterprises tend to use a variety of operating systems and enterprise software in each of their geographical locations or even functional departments. An increasingly mobile workforce does not want to be hamstrung by backend systems that are unable to communicate with each other. AT&T offers its Mobile Enterprise Application Platform, which provides a consistent mobilization experience across multiple platforms and adapts to the new ones that emerge every so often.

Frank Jules, the President of AT&T Global Solutions, explained at the Morgan Stanley TMT Conference in November 2012 that he sees in his company’s global network the bedrock on which to host and integrate platforms for cloud, mobile, enterprise software from Oracle and SAP, security, and applications. AT&T announced at the conference a new investment program, Project Velocity IP, that will invest a billion dollars in enterprise solutions out of a total investment of $14 billion.

McDonald’s, one of AT&T’s global clients, wanted to replace its static menu boards with digital signage that would refresh frequently with information on prices, meals, and disclosures. AT&T has a hotspot in almost all McDonald’s outlets around the world. It built a portal to manage the delivery of data from a central point for the display of changing information in all McDonald’s restaurants, and it collaborated with another company for the display boards.

In its effort to capitalize on a growing mobile cloud business, Verizon consolidated Verizon Business, the Terremark cloud business, Cybertrust, the global wholesale business, and the enterprise and government business from Verizon Wireless into a single Verizon Enterprise Solutions business, and reoriented it to address the needs of eight verticals rather than geographies, explained John Stratton, President of Verizon Enterprise Solutions, at the Jefferies Global Technology, Internet, Media & Telecom Conference in May 2012. In machine-to-machine communications, it sees a major opportunity to increase margins with professional services for the integration of hardware, software, and networks. The contract value of such projects is 2.5 to 3 times more than the transportation business for comparable investments in effort and capital.

Verizon was able to solve an intractable problem for the railway business with its packaged solutions. Twenty-two of its long-line railway customers faced a formidable challenge to comply with a regulatory stipulation to stop or slow down a train should a driver become unexpectedly unconscious. Verizon, in collaboration with several partners, provided a solution for remote monitoring of railway operations to manage the speed of trains.

Enterprises, unlike the mobile consumer market, are not going to be satisfied with point solutions. Instead, they need solutions able to complete a series of business processes. Carriers are well positioned to serve them with their extensive networks and deep pockets to invest in integrated solutions.

This post was previously published in the Innovation Generation of UBM Techweb.

Indiana Jones in the age of analytics

By Kishore Jethanandani

Drilling engineers navigate hazardous oil wells in earth’s dark hollows not in the manner of the swashbuckling Indiana Jones but collaboratively with staid geophysicists and geologists who parse terabytes of data to calculate the risk of the next cascade of rocks or an explosion. 3D visuals of seismological data, superimposed with sensor fed real-time data, help offsite professionals to collaborate with engineers working at well-sites.

The drilling machines also generate streams of data from an array of sensors used for well logging. The data is transmitted to remote sites, where it is aggregated and made accessible to geologists and geophysicists. These sensors can read geological data, such as the hydrocarbon-bearing capacity of rocks, as well as data related to rig operations, such as borehole pressure, vibrations, weight on the drill and its direction, and much more, all in the context of the well environment.

A major breakthrough has been achieved with the ability to pool data coming from a variety of sources in a single repository with help from standards under the rubric of the Wellsite Information Transfer Specification (WITS). The availability of a storehouse of well data opens the way for a bevy of firms, specialized in reading geological and operations data, to find patterns and guide engineers to the optimal ways to explore and produce oil in complex wells in the depths of the earth and the oceans.

A typical case of an oil exploration and production company encountering a Catch-22 situation is that of PEMEX, which ran into the dead end of an underground salt dome. The alternative was to circumvent the dome. The rub was the risk of getting into a quagmire of mud. A game plan was crafted over eight months in collaboration with a multidisciplinary staff; it included preliminary testing, 3D modeling and simulation, and contingency planning. The exercise determined that exceptional pressures were likely to be encountered due to the presence of the salt dome in the vicinity of the alternative route, compounded by a host of other probable risks.

To manage the risks, a predictive model was built from the available geological data, and its performance was monitored by comparing it with the actual performance data generated during drilling. The variance between predicted values and actuals revealed unanticipated hazards and informed action plans that engineers could use to deal with the risks.
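
The variance check described above can be illustrated with a short sketch; the predicted and actual pressure values, the tolerance, and the depth steps are made up for demonstration.

```python
# Illustrative sketch of the predicted-versus-actual check: a predictive model
# supplies expected borehole pressure by depth step; real-time readings that
# deviate beyond a tolerance trigger the contingency plan. Numbers are made up.
def check_drilling_plan(predicted_psi, actual_psi, tolerance_pct=5.0):
    """Yield (depth_index, deviation_pct) for readings outside tolerance."""
    for i, (pred, actual) in enumerate(zip(predicted_psi, actual_psi)):
        deviation = abs(actual - pred) / pred * 100.0
        if deviation > tolerance_pct:
            yield i, round(deviation, 1)

predicted = [5200, 5350, 5500, 5675, 5850]       # model output, psi by depth step
actual    = [5210, 5400, 5495, 6100, 5860]       # real-time well-logging readings
for step, dev in check_drilling_plan(predicted, actual):
    print(f"depth step {step}: {dev}% off plan - review contingency actions")
```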

The oilfields of today range from the icy Arctic, with its shifting icebergs and thawing permafrost, to the raging storms at the deeper ends of the oceans, with their easily ignited submarine methane, and the cavernous rocks of shale oil. Oil companies look to protect their multi-billion-dollar investments in these projects, and their staff, for whom a misjudged risk can mean death.

Fortunately, oil companies have accumulated multidimensional data now available on a standard platform. 3D video collaboration brings together the human talent from distant locations to crack the codes that help to improve the standards of safety and effectiveness.

A variant of this post was published in the now defunct Collaborative Planet hosted by UBM Techweb

Future of Healthcare in the USA: how it could be the growth engine

By Kishore Jethanandani

Kondratieff cycles, which span thirty to fifty years, are marked by breakthroughs in technology and reform of institutions that drive expansion; a downturn sets in as technologies mature and unnoticed dysfunction in institutions surfaces. This was true for information technologies over the last cycle. Computers were a curiosity as long as the use of behemoth mainframes was confined to some large enterprises. The turning point was the introduction of mini-computers and then personal computers in the 1980s, which became, with the advent of the Windows user interface, as much a part of everyday life as the washing machine and the refrigerator over the 1990s.

Concurrently, regulatory reform of the telecommunications industry dismantled the AT&T monopoly; the costs of communications plummeted and paved the way for a pervasive Internet. Business solutions became the buzzword of the last decade as information technologies found new uses in a broad range of industries. For the consumer, the home computer and the associated software became as essential as the furniture in their home. The mobile Internet has a growing number of applications that make the wireless phone as indispensable as a wallet. The momentum in the growth of wireless phones is expected to be maintained until 2014, partially offsetting the slowdown in the rest of the information technology industry.

The high growth rates propelled by innovation in information technologies in the 1990s began to slow over the last decade. A recent study completed by the well-known economist Dale W. Jorgenson of Harvard University and his colleagues confirms that growth rates decelerated over the last decade. The average growth of value added in the industrial economy over the period 1960 to 2007 was 3.45%, with a peak of 4.52% between 1995 and 2000, which slowed to 2.78% between 2000 and 2007. The corresponding growth of the IT-producing industries averaged 15.92% between 1960 and 2007, peaked at 27.35% between 1995 and 2000, and slowed to 10.19% between 2000 and 2007. IT-using industries had an average growth of 3.47%, peaked at 4.39%, and slowed to 2.39%.

The IT industry has ceased to be a growth engine for the economy, and its place can be taken by a revitalized health industry. For one, there is enormous demand to extend life and ease the pain of debilitating and uncommon illnesses that are growing with longer life spans. On the supply side, a welter of new technologies for drugs, medical equipment, sensors, and information technologies is revitalizing the industry. While technologies for extending life and improving its quality raise costs, some of them can also be used to drastically lower costs in an environment that encourages competition. Thus, the twin goals of lower costs and better quality, accessible to everyone, are possible if the industry is restructured to be driven by market forces.

Meanwhile, the healthcare industry’s existing paradigm of medicinal chemistry has run its course. Much of the therapy for an ageing population is effectively palliative, alleviating chronic illnesses like diabetes, Alzheimer’s and heart disease without providing a definitive cure. As a result, the costs of health care are ballooning. Chronic illnesses account for a major share of the costs–75% of the total of $2.2 trillion spent on health care in 2007[i]–with 45% of Americans suffering from at least one chronic illness. At this point, the industry is a money sink—really a gorge—unable to meet the needs of the mass market for health. Consumers don’t get care worth their money.

At this point, there is very little effort at prevention of diseases to keep costs low despite the proliferation of devices to forewarn patients and take preemptive action. The onset of stroke, for example, can be detected early with sensing devices and the more damaging consequences such as brain damage can be preempted.

The costs of hospital stays are high, and they can be lowered with remote treatment of patients at home. Hospital readmissions are common among chronically ill patients and happen when patients are not monitored after they return home. A New England Healthcare Institute study found that hospital readmissions can be reduced by 60% with remote monitoring compared to standard care, and by 50% compared to programs that include disease management, with estimated savings of $6.4 billion a year.

The inefficiencies in the industry are easily noticeable. Paperwork abounds, though it is conspicuous by its absence in other industries. Only 10% of payments to vendors in the health sector are made by electronic means; wider adoption could save an estimated $30 billion. Electronic records can drastically lower administrative costs and help to create databases that can be analyzed to improve the quality of care.

The productivity of medical personnel is low as a result of excessive paperwork. Nurses, for example, spend only 40% of their time attending to patients, while the rest of their time is spent on paperwork[ii]. Radiologists spend 70% of their time on the analysis of images, while the rest is consumed by paperwork[iii]. Regulatory compliance bogs down skilled staff in paperwork, and alternatives are hard to find when the law is inflexible.

Patients have to take time off from work and suffer long, wasteful wait times to receive care for even minor conditions, in this age of multimedia communications. It gets worse for rural folks, who have to trudge to urban hospitals. Care of dangerous criminals is hazardous when they have to be moved from prisons to hospitals. When emergencies are involved, patients have to be moved from one hospital to another if the required specialized personnel are not available; an estimated 2.2 million trips between emergency rooms happen for this reason. Entrepreneurs could very well help to bring care to patients with remote care technologies and video conferencing. The estimated savings from video-assisted consultations replacing transportation from one emergency room to another are $537 million per annum.

Today, technology is available to substitute for doctors’ skills and hospitalization while maintaining the quality of care, without forcing down doctors’ salaries by fiat. An example is angioplasty, which replaces expensive surgery for the treatment of coronary disease[iv]. The entire procedure can be completed by a skilled technician or a nurse at a much lower cost than by a specialist in heart disease. Innovations of this nature can expand the health market by tapping into the latent mass market for health care.

Therapies are not customized for each patient, and often many different options are tried in several hospitals to no avail. The problem is the inability of doctors to pinpoint the root cause of the disease. Normal radiological methods don’t necessarily help to diagnose a condition. Lately, DNA sequencing methods, such as those offered by Illumina, helped to diagnose the causes of an inflamed bowel in a six-year-old child after a hundred surgeries had failed. The DNA sequencing data can also be mined to glean insights about the most effective treatments.

Health care costs are also high due to the structure of the industry. In the health delivery segment, hospital monopolies in local areas and regions raise the cost of services. Consolidations of hospitals are believed to add $12 billion a year to costs. The value chain could be broken up and individual tasks performed with much greater efficiency. Physician-owned specialized hospitals are fierce competitors of general hospitals, especially in states with lighter regulation. In the thirty-two states in which they exist, physician hospitals are the top performers in nineteen and among the top in thirteen[v]; regulation prevents them from operating in the remaining twenty states.

In the risk management component of the industry, insurance plans are not portable outside a state or a medical group. Most insurance plans are paid for by employers or, for the elderly, by the Government. Users of insurance plans don’t have much incentive to shop for health care or to realize savings by participating in wellness programs. Health Savings Accounts (HSAs) put consumers of healthcare in charge of their choices by letting them spend out of their own pocket or from savings reserved for the purpose. They are expected to drive down costs by shopping for alternatives, evaluating the value they receive for the money they pay, and becoming more aware of their own health risks. The evidence about the experience with HSAs is mixed; enrollment has increased, premiums are lower compared to traditional plans[vi], and their rate of increase is lower. The differences are smaller when adjustments are made for the age and the risk of the buyers of consumer plans and of traditional plans[vii]. On the other hand, the evidence on the quality of care received by buyers of HSAs is mixed, as is the care they take to search for their best option and to make lifestyle choices that lower their healthcare costs.

Innovation will begin to drive the health care industry, and the industry will become a growth engine for the economy, when costs and the corresponding quality of service are transparent and comparable. Currently, costs are estimated for individual departments and not by conditions and individual patients[viii]. Only then will consumers shop for and switch to the most competitive offering, and only then will entrepreneurs know where they can find opportunities to compete with incumbents. Even by rough measures, the quality of care, for the same expense, varies enormously—by a factor of 82[ix].

The search for the most desired doctors and therapies, with the best value to offer for the price paid, is arduous. Prices for medical services vary enormously, and comparisons are hard to make because of a maze of discounts. Most consumers of health care have no incentive to shop around, as their insurance plans are paid for by their employers. Aggregation of information and electronic search will lower the costs of finding the best doctors and therapies. As patients search for the best option, it will also stimulate competition among vendors. The experience with high-deductible plans does show that health costs drop when users shop around and compare before they buy healthcare.

The inefficiencies in the health industry are an enormous potential opportunity for growth, which will receive another fillip when new biotech products move up their S-curve. For the USA, with its lead in medical innovation, there is also an opportunity to expand overseas, where health systems are again largely inefficient. The key to tapping the latent opportunity is an environment that encourages technological innovation and competition. Employer-paid or Government-paid health insurance will be most effective when it encourages individuals to buy their own insurance plans and manage their own risk. For the rest, billions of consumers shopping for health care will drive down costs much faster than any group insurance or a Government department. At this point in time, the health marketplace offers very little data to make comparisons, or even the flexibility to switch from one source to another. The incentive for cost reduction and the adoption of new innovations will be greatest when vendors face a more competitive environment.

A new paradigm in health care will be possible in such a competitive environment. Genomics, together with biotechnology, nanotechnology, medical devices, health IT, and sensors, will help to launch treatments that have long been elusive and that affect large masses of people. Additionally, they are customized for each patient, who could be immune to standard treatments, and they have a much greater focus on prevention. These technologies are also able to shorten the duration of illness, replace damaged organs, and reverse the degenerative effects of ageing. Hypertension, or high blood pressure, for example, affects 20% of Americans, and the available cures generally require life-long treatment. The discovery of the relationship between genetic mutations and extremes in blood pressure[x] has improved the understanding of possible preventive treatments for high blood pressure. Diagnostic tools that detect changes in protein structure will anticipate the onset of dreaded diseases like cancer and enable advance action before the disease becomes incurable. Early results are accurate, and clinical adoption is expected to come soon[xi].

[i] “Almanac of Chronic Disease”, 2009.

[ii] “Growth and Renewal in the USA: Retooling America’s economic engine”, McKinsey Global Institute, 2011.

[iii] “Strategic Flexibility for the Health Plan Industry”, Deloitte.

[iv] “Will Disruptive Innovations Cure Health Care?” by Clayton M. Christensen, Richard Bohmer, and John Kenagy, Harvard Business Review.

[v] Consumer Reports, quoted in “Why America needs more Physician hospitals”, The Senior Center for Health and Security, August 2009.

[vi] “Generally, premiums for CDHPs were lower than premiums for non-CDHPs in all years except 2005, when premiums for HRA plans were higher than premiums for non-CDHPs. By 2009, annual premiums averaged $4,274 for HRA-based plans, $4,517 for HSA-eligible plans, and $4,902 for non-CDHP plans. Note that the $4,517 premium for HSA-eligible plans includes an average $688 employer contribution to the HSA account. Hence, premiums for HSA-eligible coverage were $3,829 for employee-only coverage in 2009”, in “What Do We Really Know About Consumer-Driven Health Plans?”, by Paul Fronstin, Employee Benefit Research Institute.

[vii] Op. cit.

[viii] “Discovering—and lowering—the real costs of health care”, by Michael Porter, Harvard Business Review.

[ix] “When and how provider competition can help improve health care delivery”, McKinsey.

[x] “The future of the biomedical industry in an era of globalization”, Kellogg School of Management, 2008.

[xi] “In this case Proteomics is being used as a diagnostic tool and early data from the projects have been very positive with the computer software managing to identify 100% of ovarian cancer samples (when compared to a healthy sample) and 96% of prostate cancer samples. With these positive results it is surely only a matter of time before diagnostic Proteomics is seen in a clinical setting”, in “Could Proteomics be the future of cancer therapy?”


YouTube competes with commercial TV

By Kishore Jethanandani

YouTube’s harum-scarum expansion of goofy user-generated content is giving way to the first steps towards professional content on premium TV channels. Sports content is the linchpin of commercial TV and will likely blaze the trail towards its long-anticipated disruption by online social TV.

YouTube’s spending on premium channels more than doubled in 2013 over 2012. The “grants” received by producers of content for premium channels increased from $100 million in 2012 to $250 million in 2013, according to the numbers cited in a Needham Insights report. Content producers set aside all their revenues, up to the amount of the grant, to pay back YouTube.

YouTube has not yet purchased broadcasting rights from mainstream sports clubs, but it is able to gain the rights to broadcast niche sports or a geography outside the main centers of a game. Skydiving is largely unknown outside small groups of hobbyists but found an audience of 8 million on YouTube when Felix Baumgartner made a sound-barrier-breaking jump. The Professional Bull Riders Association (PBR), whose sport is known to few people, saw an advantage in an all-digital strategy, in collaboration with YouTube, to expand its reach. Major League Baseball streams games on YouTube outside of the USA, the NBA streams its minor league games, and the London Olympics expanded their reach to Asia.

The future of broadcasting rights for major league games is caught in limbo as conflicting forces pull them in opposite directions. Cable companies need exclusive broadcasting rights to retain their customers, and they are bidding at higher rates to keep them. On the other hand, the hold of cable companies on broadcasting rights is tenuous as the audience shifts online to access content on mobile devices or any device. The number of households without any television jumped to 5 million in 2012, compared to 2 million in 2007.

Meanwhile, YouTube is able to cater to the demand from sports clubs to keep fans engaged with highlights of the game, interviews with sportsmen and sportswomen, and peeks into the backrooms. One such YouTube channel is Love Football, which has snippets of games with captivating content: footage of the goals scored, moments of suspense and skillful maneuvering, and reviews of the news.

Some of the more successful and competitive sports broadcasters supplement their live programming with partnerships with YouTube. ESPN, the leading sports broadcaster, has 1.2 million subscribers to its YouTube channel. Fox Sports, the upcoming challenger, has over 69,000 subscribers and sees their growth as a priority. It is looking to gain an edge from fans’ interaction with the content.

The demand for streaming media will only grow, as 30 percent of cable subscribers have expressed their willingness to switch. YouTube will be able to tip the balance of forces in the broadcasting industry in its favor when high-speed broadband networks, like the one Google is making available in Kansas, are more widely available across the USA for superior-quality programming.

YouTube’s premium channels will bring some of the benefits of commercial TV, such as ease of discovery, while keeping the fun of the personalization of the Internet world. Not all of the freewheeling ways of the Internet will be lost, as fans will still be able to contribute their content in conversations within the communities created by sports clubs, with the added benefit of convenient cataloging. A tough battle over broadcasting rights looms as the number of fans participating on Internet channels rises rapidly.

A version of this post was published in Digital Canvas Retail hosted by UBM Techweb