Intelligent street lights illuminate new applications

Networked city lights with cameras and sensors gather data usable for traffic management

by Kishore Jethanandani

Smart cities have evolved beyond pilot projects that tested digital services to the delivery of networked digital services. Their new pivot is multi-use platforms that integrate several data streams to improve the city’s infrastructure.

Street lights, for example, present an opportunity to create data networks by using each pole as a node that gathers data from a cluster of local devices and feeds it to several applications and platforms for generating services.

San Diego

San Diego reflects an emerging trend of using street lights as nodes in a pervasive computing and networking system.

“The replacement of aging street lights with LED lights not only created an opportunity to significantly lower energy costs but also to gather data by making them aware of their surroundings with audio, video and environmental sensors,” David Graham, deputy COO of San Diego, said in an interview with Telco Transformation.

Smart cities are also supplementing government grants with surpluses from energy savings to fund larger projects.

“Private sector companies, with legal protection from energy savings performance contracts, are willing to make the initial investments in street lighting because cities agree to share the huge savings realized from the lower energy consumption by LED lights,” Ryan Citron, research analyst at Navigant Research, told Telco Transformation.

Data gathered from HD cameras installed on street lights also has transportation applications that help to optimize the timing of traffic signals at intersections to minimize congestion.

“Currently, we have adaptive signaling for traffic flow management at 30 intersections,” said Graham. “The data from the sensors on stop lights is analyzed to decide the intervals at which stop lights change. With the use of AI, we can make the street lights adaptive not only to events but also to queue lengths, holidays and many other variables. We have been able to reduce queuing by 40% in one of the corridors.”
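
As a rough illustration of the idea, the sketch below scales a green-phase interval with the measured queue length; all thresholds, and the event-day adjustment, are invented for the example rather than taken from San Diego’s system.

```python
# Hypothetical sketch of queue-responsive signal timing; all numbers invented.

MIN_GREEN_S = 15   # safety floor for a green phase, in seconds
MAX_GREEN_S = 90   # ceiling so cross traffic is never starved
BASE_GREEN_S = 30  # default interval when queues are normal

def green_duration(queue_len: int, is_event_day: bool = False) -> int:
    """Scale the green phase with the measured queue length."""
    # Roughly 2 extra seconds per queued vehicle beyond a 5-car baseline.
    extra = max(0, queue_len - 5) * 2
    if is_event_day:
        extra = int(extra * 1.5)  # events produce longer-lasting queues
    return min(MAX_GREEN_S, max(MIN_GREEN_S, BASE_GREEN_S + extra))

print(green_duration(3))                      # light traffic -> 30
print(green_duration(20))                     # long queue   -> 60
print(green_duration(20, is_event_day=True))  # event day    -> 75
```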

Network choices
The growing scope of smart cities applications has created many new possibilities in improving a city’s infrastructure, but it has also created a dilemma for city network administrators. While their ideal choice for a network is fiber optics, this option can be cost-prohibitive for the current bandwidth needs of cities. Other popular low bandwidth and cheaper networks, like SigFox, are useful for microdata but could impede the future growth of higher bandwidth smart city applications.

Furthermore, multiple applications, consuming varying volumes of data, are built on top of a common platform. The data is not only for vehicle traffic management but also smart lighting to save energy, event and emergency management, smart parking, air quality monitoring or uses as varied as easing eye strain by changing the color of LED lights, crime prevention, surveillance, predictive failure notification, etc. Flexible networking is needed to route traffic cost efficiently, and meet service quality standards for a broad variety of applications.

Some solution providers improvise with help from analytics and make do with the least possible bandwidth in the short term.

“Analytics embedded in the cameras on street lights transmit only the results of the query requested for traffic management such as counts of traffic in a specific lane. Since the traffic flow in a region is interrelated, we can use the traffic data from the queries and pre-determined correlations between them to estimate the expected impact on traffic at proximate intersections,” Sohrab Modi, CTO and SVP of Engineering at Echelon, told Telco Transformation. Accurate estimates are achieved only after training the algorithms on a great deal of data.
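
The estimation Modi describes can be pictured as a simple linear projection: coefficients learned offline map the counts reported at one metered intersection onto expected counts at its neighbors. The sketch below is illustrative only; the intersections and coefficients are made up.

```python
# Illustrative only: pre-learned coefficients relate counts at one
# intersection to expected counts at its neighbors. Numbers are invented.

observed = {"5th_and_main": 42}  # vehicles/minute reported by the camera's query

# Coefficients learned offline from historical co-occurrence of counts.
correlation = {
    ("5th_and_main", "6th_and_main"): 0.8,
    ("5th_and_main", "5th_and_oak"):  0.45,
}

def estimate_neighbors(observed: dict, correlation: dict) -> dict:
    """Project observed counts onto neighboring intersections."""
    estimates = {}
    for (src, dst), coeff in correlation.items():
        if src in observed:
            estimates[dst] = observed[src] * coeff
    return estimates

print(estimate_neighbors(observed, correlation))
# {'6th_and_main': 33.6, '5th_and_oak': 18.9}
```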

Flexible networks
A study conducted by Navigant Research “analyzed a dozen connectivity technologies and their suitability as a smart street lighting/city platform” and identified medium-band networking solutions as the best option for balancing cost with support for the most “high-value smart city applications.”

Power line communication (PLC), a medium-band technology, has been widely used in European countries because it provides network connections over the power lines already feeding street lights, which saves on upfront capital costs. In combination with RF-Mesh, a peer-to-peer wireless network, it maneuvers around obstacles such as tall buildings. Being hard-wired, PLC is less flexible, but that also makes it more secure.

Narrowband options like LPWAN are very inexpensive and offer long battery life, but by themselves they cannot serve the needs of several applications. Carriers are launching NB-IoT and LTE Cat-M1, which provide the security of licensed spectrum, while the other narrowband networks use free unlicensed spectrum. Broadband connections like 3G and 4G are ubiquitous and can serve the bandwidth needs of multiple applications. WiFi is a cheaper broadband option because it does not use licensed spectrum, and it can aggregate traffic from several devices.

Smart cities can prepare themselves for their future needs by subsuming these networks into an overarching software-driven network with centralized controls. The intelligence of centralized controls will help to route traffic to any of these networks depending on the needs of individual applications and their users.
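
A toy version of such a centralized control decision, with an invented catalog of networks and costs, might simply pick the cheapest network that meets an application’s bandwidth requirement:

```python
# Sketch of a centralized controller's routing policy; the network catalog
# and the application requirements are hypothetical.

NETWORKS = [
    # (name, bandwidth in kbps, cost per MB in cents)
    ("LPWAN", 10, 0.1),
    ("PLC", 100, 0.3),
    ("WiFi", 10_000, 0.5),
    ("LTE", 50_000, 2.0),
]

def pick_network(required_kbps: int) -> str:
    """Choose the cheapest network that satisfies the bandwidth requirement."""
    candidates = [n for n in NETWORKS if n[1] >= required_kbps]
    return min(candidates, key=lambda n: n[2])[0]

print(pick_network(5))       # sensor telemetry -> LPWAN
print(pick_network(4_000))   # parking video stills -> WiFi
print(pick_network(20_000))  # live HD video -> LTE
```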

Smart cities have learned to build the foundations of intelligent services that serve a variety of needs, valuable enough that consumers are willing to pay for them. As more services are offered on the same platform, the incremental cost of each declines. Software networks will keep the costs of network expansion low by making the most of the networks already in place, and city administrators can future-proof their networks while focusing on creating the environment for more innovative services.

Global multicloud webscale networks nip spikes in traffic

Global networks capitalize on heterogeneous network resources to deliver new applications

by Kishore Jethanandani

Heterogeneous applications and multiple clouds are characteristic of global webscale networks. Traffic flows in such interdependent networks snowball unexpectedly; spikes in application use are endemic and degrade performance. At its worst, the failure of one application has a domino effect on the network and a catastrophic collapse ensues.

Optimization of web-scale networks irons out their many wrinkles, automates operations, and speeds up responses with predictive algorithms to preempt network outages by deploying resources fast enough to keep pace with anticipated traffic.

Emergence of web-scale networks 

The Twitter Inc. engineering team revealed the details of its redesign for web-scale operations, which began after spikes in traffic during the 2010 World Cup repeatedly disabled its network for short periods. By August 2013, Twitter’s infrastructure was robust enough that nothing untoward happened when a televised airing of the film Castle in the Sky in Japan drove traffic to roughly 20 times the normal rate.

Twitter shifted to a virtualized and microservice-based architecture to gain flexibility. Tweets were assigned an identity so they could be stored on any storage device in a distributed network. Further improvements were made after 2014 to provide route options, distribute resources to the edge and to enable granular optimization with policy management. Similar approaches have been adopted by companies such as Google (Nasdaq: GOOG), Microsoft Corp. (Nasdaq: MSFT) and Facebook.

Investments in bandwidth alone cannot cope with traffic flows that are increasing exponentially due to the growth of the Internet of Things and of speech, image and video data. Web-scale networks streamline processes to avoid local choke points and use optimization to increase the overall availability of the network.

Software-driven performance improvement 

Virtualization and microservices — along with managed services platforms — play a critical role in optimizing the network. Microservices are tools to wring out the inefficiencies by aligning processes with data flows to reduce latencies and increase availability in web-scale networks.

“Microservices are focused on building small services that provide a single piece of functionality,” said Eric Peffer, cloud consulting practice lead, World Wide Technology. “You string several of these microservices together for more advanced functionality. Platforms such as Kubernetes, Pivotal Cloud Foundry, Docker Swarm, Service Fabric and AWS Elastic Beanstalk provide the management and tooling to control the elaborate coordination of the strings of microservices. The data flows are speeded up by abstracting functionality for a series of processes that are aligned to data flows from their source to the destination.”
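
As a toy illustration of Peffer’s point, the sketch below stands three single-purpose “services” in for real deployments and strings them into a pipeline; in production, an orchestration platform would wire the equivalents together across hosts.

```python
# Toy illustration of "stringing" single-purpose microservices into a
# pipeline; each function stands in for an independently deployed service.

def extract_frames(stream: str) -> list:
    """Service 1: split a video stream into frames (stubbed)."""
    return [f"{stream}-frame{i}" for i in range(3)]

def count_vehicles(frame: str) -> int:
    """Service 2: run a detector on one frame (stubbed)."""
    return len(frame) % 4  # placeholder for a real model's output

def aggregate(counts: list) -> float:
    """Service 3: reduce per-frame counts to one metric."""
    return sum(counts) / len(counts)

# The orchestration layer (Kubernetes, Cloud Foundry, etc.) would coordinate
# these across hosts; here they compose as plain function calls.
frames = extract_frames("cam42")
print(aggregate([count_vehicles(f) for f in frames]))
```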

Software-defined networks also have flexibility in choosing the means to move traffic so that local choke points do not necessarily slow down movement. The operations needed to make or change choices can be executed automatically at the application level.

“There are services available for moving data from one application to another, such as caching, data grid services and message queuing, allowing you to adapt to changes and maintain a consistent flow of data,” Peffer said.
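
A minimal sketch of that decoupling, using Python’s standard-library queue in place of a real message broker such as Kafka or RabbitMQ: the producer and consumer run at their own pace, and the queue absorbs the difference.

```python
# Producer/consumer decoupled by a queue; a production system would use a
# broker such as Kafka or RabbitMQ instead of an in-process queue.

import queue
import threading

q: "queue.Queue[str]" = queue.Queue(maxsize=100)

def producer() -> None:
    for i in range(5):
        q.put(f"reading-{i}")  # blocks only if the consumer falls far behind
    q.put("STOP")

def consumer() -> None:
    while (item := q.get()) != "STOP":
        print("processed", item)

threading.Thread(target=producer).start()
consumer()
```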

Intelligent network operations 

Web-scale networks interconnect multiple clouds as global enterprises extend the reach of their applications to branches and partners around the world without sacrificing performance. Enterprises want to make their applications and microservices portable across several clouds and interweave them on demand. The share of enterprises with a hybrid cloud strategy rose to 58% in 2017, up from 55% in 2016.

VMware Inc. (NYSE: VMW) has built a native cloud on top of Amazon Web Services Inc. for geographical reach, availability, and flexibility. VMware’s multi-cloud management stack, along with its Cloud Foundation and NSX platforms, enables portability across clouds. The bedrock of the management platform is a policy management tool for micro-segmentation of the cloud.

Another survey found that the two most important motivations for a multi-cloud strategy were more efficient workloads (73%) and more agility (69%). Currently, one third of enterprises want to support multiple clouds for synchronizing applications across them or for workload and data migration. Looking ahead, 42% want most of the resources to be used for management and orchestration across multiple clouds. The intelligence and the management tools are advancing to cope with the increased complexities.

The policy management software plays a supplementary role to the traditional OSS/BSS systems.

“The policies define the parameters for security, configuration, the footprint of the application, edge or core, mini-datacenter, traditional data centers, resource use, networking performance metrics and more,” said Gabriele Di Piazza, vice president of products and solutions for VMware’s Telco NFV Group.

“OSS/BSS systems have been undergoing a significant transformation with IP-based services, which also involved data collection of application and network performance to calculate KPIs,” Di Piazza said. “Machine intelligence does dynamic analysis of data to understand the key determinants of performance to predict network behavior and performance. This is needed to reduce the mean time to repair, take proactive action to prevent failures, or to scale capacity before it falls short. Our acquisitions of Wavefront and Arkin are our investments in real-time gathering of data and predictive algorithms.”
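
One simple form of the dynamic analysis Di Piazza describes is trend-following on KPI telemetry. The sketch below, with invented numbers, smooths a utilization series with an exponentially weighted moving average and raises a scale-out alarm when the trend crosses a limit; production systems use far richer models.

```python
# Hedged sketch of KPI trend analysis: an exponentially weighted moving
# average flags a KPI drifting past a limit. All numbers are invented.

def ewma_alerts(samples, alpha=0.3, limit=80.0):
    """Yield (value, smoothed, alarm) for a stream of utilization samples."""
    smoothed = samples[0]
    for value in samples:
        smoothed = alpha * value + (1 - alpha) * smoothed
        yield value, round(smoothed, 1), smoothed > limit

link_utilization = [60, 62, 65, 70, 78, 85, 91, 95]  # percent
for value, smoothed, alarm in ewma_alerts(link_utilization):
    print(value, smoothed, "SCALE OUT" if alarm else "")
```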

Performance needs to be fortified at every level to maintain consistency on global networks. Web-scale networks such as Google and AWS have tools to auto-scale in response to surges in traffic; they can spin up new instances when traffic surges.

“Performance behavior in verticals like e-commerce have their unique characteristics which we identify with our telemetry data,” said Anand Hariharan, vice president of products, Webscale Networks Inc. “The traffic can surge by as much as a hundred times following events like celebrities posting a picture on Instagram with their products. We have written an algorithm to forecast traffic surges specifically for e-commerce to deploy more instances to keep pace with demand growth.”
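
A back-of-the-envelope version of such capacity planning: forecast the surge, add headroom, and compute the instances to deploy. All figures below are invented; Webscale Networks’ actual algorithm is not public in this detail.

```python
# Invented-number sketch of sizing capacity ahead of a forecast surge.

import math

REQUESTS_PER_INSTANCE = 500  # sustained requests/sec one instance can serve

def instances_needed(forecast_rps: float, headroom: float = 1.25) -> int:
    """Provision for the forecast plus safety headroom."""
    return math.ceil(forecast_rps * headroom / REQUESTS_PER_INSTANCE)

baseline_rps = 800
surge_multiplier = 100  # e.g., a celebrity post driving traffic to the shop
forecast = baseline_rps * surge_multiplier

print(instances_needed(baseline_rps))  # 2 instances in steady state
print(instances_needed(forecast))      # 200 instances ahead of the spike
```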

Web-scale networks are buffeted by myriad factors at the local and global level, in addition to the demands of new applications and data sources. They have more choices of routes and network configurations, and design and automation tools that let them adapt in the moment. Machine learning will continue to evolve as more data is gathered and innovative ways are found to direct and optimize traffic.

A version of this article was published by Light Reading’s Telco Transformation



Future cloud-native networks could speed up app development

Applications development with heterogeneous resources on networks speeds up with DevOps and containers

By Kishore Jethanandani

Future networks are going cloud-native with a wide range of ramifications for the speed of applications development. Developers will be freed up to create new solutions unencumbered by hardware constraints or even the choice of platforms.

Software-defined open service provider networks are following in the footsteps of datacenters and storage devices — they are becoming pools of network resources that can be used interchangeably by multiple clients in several regions.

In characteristic cloud-like manner, they will potentially serve a variable flow of services, in both volume and type, delivered on demand rather than as fixed on-premises IT deployments. In this scenario, service flows can best track demand currents via containers that are added or subtracted as needed.

The heterogeneity of resources, operating systems, equipment vendors and services on telecom service provider networks is expanding as the epicenter of services delivery sprawls toward the edge to support the Internet of Things, big data analytics, mobile and wearable devices, and autonomous cars, now and in the future. The demand for services waxes and wanes at these edge points, synchronously or asynchronously. Service providers therefore need the flexibility and elasticity of containers to scale up and out, serving a diversity of needs with resources that are not encumbered by their platforms, protocols or hardware.

The development of applications with containers seamlessly dovetails into operations and deployments enabled by a growing range of scheduling, management, and orchestration platforms.

Containers are far more portable than virtual machines (VMs) because they abstract away not only the hardware but also the operating system. Stateless containers go a step further than stateful ones by decoupling a container’s configuration and operating data from the container itself; that state is stored in a database and invoked when services are generated.
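
A conceptual sketch of the stateless pattern: nothing is baked into the container, and every replica boots by fetching its configuration from a shared store. Here a plain dict stands in for a real external database such as etcd or Redis, and the service name is invented.

```python
# "Stateless" service sketch: configuration and operating state live in an
# external store and are fetched at startup. A dict stands in for a real
# database such as etcd or Redis.

CONFIG_STORE = {
    "vfirewall-7": {"rules": ["allow 443", "deny *"], "log_level": "info"},
}

class StatelessService:
    def __init__(self, service_id: str):
        # Nothing is baked into the container image; state is looked up.
        self.state = CONFIG_STORE[service_id]

    def start(self) -> None:
        print(f"starting with rules={self.state['rules']}")

# Any replica, on any host, boots identically from the shared store.
StatelessService("vfirewall-7").start()
```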

Service generation with containers in the telecom world makes greater demands than in the enterprise. For one, service providers are bound by stringent quality-of-service contracts. Second, telecom companies live in a more distributed and heterogeneous environment with considerable legacy resources.

Containers, workloads, and operations 
DevOps is a sequence of business processes that starts with application development, followed by the developers’ testing for bugs; applications then go through staging, a process of testing for the desired operating performance, and end in production. Operations has historically been a valley of death for developers, where many applications foundered because they could not work in the production environment. Containers seek to smooth the transition from development to production with continuous delivery methods, using tools from Jenkins, Chef, Ansible, Docker Inc. and Mesosphere with a variety of plug-ins.

Container images enable a distributed team of developers to write code and use it in any environment. They automate the tedium of manually ensuring that the code, its dependencies such as linked databases and firewalls, and the attendant configuration all work in any IT operating environment, from one group of developers using, say, a Mac to another using Windows.
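
A minimal sketch of that workflow using the Docker SDK for Python (the docker package), assuming a Docker daemon is running locally; the image tag and build path are placeholders.

```python
# Hedged sketch with the Docker SDK for Python; names are placeholders.

import docker

client = docker.from_env()

# Build an image from a Dockerfile so the code and its dependencies travel
# together, whatever OS the developer's workstation runs.
image, _build_logs = client.images.build(path=".", tag="team/app:dev")

# The same image runs unchanged on a Mac, Windows, or Linux host.
container = client.containers.run("team/app:dev", detach=True)
print(container.status)
```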

Containers on telecom networks 
Deployment of code into the production environment of a telecom service provider is an exercise in scaling while also ensuring security and quality of service. It includes the processes of clustering containers and joining them with resources and functions on the network at the desired nodes to generate a service.

New-age tools like Mesos achieve scale by abstracting all network resources and functions so that they can be invoked as if by a single operating system spanning a datacenter. Verizon is one carrier using Mesos for its hyperscale operations. Verizon Labs’ Damascene Joachimpillai, director of technology, explained the rationale for containers and for management and orchestration platforms such as Mesos, as opposed to virtual machines.

“Most applications — IoT or otherwise — have multiple cooperating and coordinating tasks. Each of these tasks has specific resource requirements,” Joachimpillai said. “One can bundle them into a single monolithic application and provide management using a virtual machine, or deploy them independently. When one deploys these tasks as microservices, one needs a scalable resource scheduler… If they were run on bare metal, then redundancy and resiliency of the application must be considered — and one needs to provide an application management entity that monitors the health. Most of these needs and constraints are removed when using containers and an application orchestration system like Mesos.”
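
For illustration, deploying such a task to Mesos through the Marathon scheduler’s REST API might look like the following; the endpoint host, app id and resource figures are placeholders, not Verizon’s configuration.

```python
# Illustrative deployment of a microservice to Mesos via the Marathon REST
# API; endpoint, app id and resource figures are placeholders.

import json
import urllib.request

app = {
    "id": "/telemetry-collector",
    "cmd": "python collector.py",
    "cpus": 0.5,     # fraction of a core per task
    "mem": 256,      # MB per task
    "instances": 3,  # Marathon keeps three copies alive, restarting failures
}

req = urllib.request.Request(
    "http://marathon.example.internal:8080/v2/apps",
    data=json.dumps(app).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(urllib.request.urlopen(req).status)  # 201 when the app is accepted
```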

The production environment of a network does not use containers exclusively, nor will it necessarily do so in the future, so means must be found to interlink them with options such as virtual machines and physical resources, regardless of the IT environment.

“When you get into a production environment where you have workloads on physical or virtual assets or on the cloud, it is a whole new world… Instead of using multiple platforms for a diversity of workloads, we have a single platform for all of them,” Hussein Khazaal, head of marketing at Nuage Networks, said.

In the labyrinth of a network of this nature, with the sprawl growing as containers multiply, security threats lurk and customer experience can suffer as the likelihood of failures grows.

“We automate monitoring and responses to security events or failures through our Virtualized Security Services (VSS) features and integrations with security threat analytics solutions from our partners,” Hussein added. “VSAP [Virtualized Services Assurance Platform] can correlate failures in the underlay with impacted workloads in the overlay, so that operators can quickly identify faults and make corrections with minimal effort.”

The emerging software-driven network gains agility and flexibility by threading together several swarms of containers, virtualized and physical networks, abstracted resources and functions that are held together by data and intelligence for visibility, automated responses, and monitoring tools for failure prevention, optimization and quality assurance. Containers help by bundling together interrelated components of a larger application and making them reusable for ever-changing needs.

A version of this article was previously published by Light Reading’s Telco Transformation

Cognitive AI: the human DNA of Machines

by Kishore Jethanandani

Cognitive computing lends the five senses of humans to machines

Cognitive artificial intelligence (AI) is a step change in machine intelligence, adding data from image, speech, video and audio recognition to consumer and enterprise network applications.

As a result, service providers will be saddled with exponentially higher data volumes spread over many more edge nodes on distributed networks, all of which makes them more susceptible to wilder traffic spikes than ever before.

Applications 

Microsoft’s consumer application for the blind, which helps them move about independently, epitomizes the spectrum of cognitive artificial intelligence capabilities. The blind perceive objects with video recognition and receive environmental data from sensors to navigate freely, while cloud- and network-hosted machine intelligence processes all of that data in the background.

Enterprise applications — boosted by APIs used by Amazon’s Alexa, among others — have focused on customer service and accelerating business processes. Boxover, for example, integrates CRM databases and speech recognition so that airlines can notify customers about missing bags via chatbots. Information flows seamlessly across distributed networks from operations to customer data, and onward to a chatbot on a passenger’s smartphone.

The euphoria over chatbots in 2016 has waned as consumers are discouraged by the wrinkles in their design. Investments in natural language processing and other types of cognitive AI, however, are growing unabated. An MIT Technology Review survey found that companies currently investing in machine intelligence are focused on natural language processing (45%) and on text classification and image recognition (47%), among others. For this year, text classification (55%) and natural language processing (52%) are among the top priorities for machine intelligence planners.

Service providers 

Cognitive AI applications that process multiple streams of data are best run in real time on clouds and telecom networks. Lightbend has a platform designed for cognitive AI-enabled enterprise applications, based on its Reactive Fast Data platform — built on top of Scala and Java — to address the needs of elasticity, scalability, and failure management.

“A new approach is required for developers leveraging image, voice and video data across many pattern recognition and machine learning use cases,” said Markus Eisele, developer advocate at Lightbend. “Many of these use cases require response times in the hundreds-of-milliseconds timeframe. This is pushing the need for languages and frameworks that emphasize message-driven systems that can handle asynchronous data in a highly concurrent, cloud-based way — often in real-time.”

Microservices are key to keeping pace with the multitude of variations in data flows.

“Cognitive applications are often a set of data sources and sinks, in which iterative pipelines of data are being created,” Eisele said. “They call for low latency, complex event processing, and create many challenges for how state and failure are handled. Microservices allow composability and isolation that enables cognitive systems to perform under load, and gives maximum flexibility for iterative development.”
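
The message-driven, asynchronous style Eisele describes can be sketched minimally with Python’s asyncio (the Lightbend stack itself is Scala- and Java-based; Python is used here purely for illustration, and the latency figure is invented).

```python
# Minimal message-driven sketch: frames are processed concurrently rather
# than as one blocking call at a time.

import asyncio

async def recognize(frame: str) -> str:
    await asyncio.sleep(0.01)  # stands in for model inference latency
    return f"label({frame})"

async def pipeline(frames: list) -> list:
    # Fan the work out and gather results as they complete.
    return await asyncio.gather(*(recognize(f) for f in frames))

print(asyncio.run(pipeline(["f1", "f2", "f3"])))
```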

Network solutions 

Cognitive AI’s ability to process a broad spectrum of data and content enables it to take on daunting challenges such as zero-day cyber-security threats, which are known for their sabotage of the Iranian nuclear program and which elude search engines by conducting their operations in the shadowy world of the subterranean dark net. They are spotted by microscopic classification of data and content found by scouring the Internet, with machine learning algorithms able to parse any kind of file and ferret out those related to cyber threats and malware.

SparkCognition partnered with Google (Nasdaq: GOOG) to leverage TensorFlow, which is an interface designed to execute machine learning algorithms — including those capable of pattern recognition in cognitive data — to be able to identify threats lurking in millions of mobile and IoT devices.

“Signature-based security software is updated periodically and falls short for protection against zero-day or polymorphic malware,” said Joe Des Rosier, senior account executive at SparkCognition. “Our algorithm dissects the DNA of files suspected to be malicious and is deployed as a microservice to run on the endpoint [such as a mobile device] or in the cloud. The unsupervised, self-learning aspects of our algorithm help to keep pace with the changing landscape of cybersecurity.”
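
The following is not SparkCognition’s model, just a generic sketch of the approach: a classifier trained on static file features, with invented values, scores a suspect file’s probability of being malicious.

```python
# Generic sketch of classifying files from static features; not
# SparkCognition's algorithm, and all feature values are invented.

from sklearn.ensemble import RandomForestClassifier

# Features per file: [size_kb, entropy, num_imports, is_packed]
X_train = [
    [120, 4.1, 80, 0], [300, 4.5, 120, 0],  # benign samples
    [90, 7.8, 5, 1],   [45, 7.5, 3, 1],     # malicious samples
]
y_train = [0, 0, 1, 1]  # 0 = benign, 1 = malware

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# High entropy and packing are classic signs of obfuscated payloads.
suspect = [[60, 7.9, 4, 1]]
print(model.predict_proba(suspect))  # probability of each class
```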

Manual inspection of equipment and facilities spread across geographies and neighborhoods is not fast enough for enterprises to make replacements in time to avoid downtime. A customer of SparkCognition had 20,000 vending machines sending unspecified alerts with no way of separating out the false positives.

“Cognitive AI helps to characterize failures with visuals for parts and natural language to parse manuals,” said Tina Thibodeau, vice president of strategic alliances and channels at SparkCognition. “We use historical and real-time data to pinpoint the causes of expected failures with a long enough lead time for the customer to be able to act. Our service provider partner provides the connectivity, the data layer, and the business logic.”
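
A toy version of turning such predictions into action: given per-machine estimates of days to failure, queue up every machine inside the lead-time window. The lead time, machine ids and estimates below are invented.

```python
# Invented-number sketch of converting failure predictions into a
# maintenance queue with enough lead time to act.

LEAD_TIME_DAYS = 14  # how far ahead the customer needs to act

def maintenance_queue(machines: dict) -> list:
    """Return (days, id) pairs predicted to fail within the lead-time window."""
    return sorted(
        (days, mid) for mid, days in machines.items() if days <= LEAD_TIME_DAYS
    )

predicted_days_to_failure = {
    "vend-0017": 3, "vend-1204": 45, "vend-0882": 11, "vend-3310": 200,
}
print(maintenance_queue(predicted_days_to_failure))
# [(3, 'vend-0017'), (11, 'vend-0882')]
```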

Future of cognitive AI 

Robust cognitive AI systems are works in progress as their information architecture is honed for perfection by learning from the false starts of early applications.

“The value of any AI technology, not only cognitive, hinges on its knowledge base, content base, and the database curated to retrieve valuable information,” said Earley Information Science CEO Seth Earley. “A bot or a digital assistant is a retrieval engine, a channel, which needs an information architecture, knowledge engineering, and data integration before an AI engine can be trained to find the relevant information.”

Cognitive AI poses some unique challenges, according to Earley.

“Knowledge engineering for natural language considers its interactive nature,” he said. “When somebody has a query, the bots respond, and human agents affirm or make corrections in the classification of the intent. The learning algorithms evolve as humans edit the responses across a growing number of conversations. In speech recognition, the process starts with capturing the language of a message. Its interpretation, or the intent, follows and requires human effort.”
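
The human-in-the-loop cycle Earley describes can be caricatured in a few lines: agent corrections become labeled examples for the next retraining round. Everything below is invented for illustration; the classifier is a deliberately naive stand-in.

```python
# Toy human-in-the-loop sketch: corrections accumulate as training data.

from typing import Optional

training_set = [("where is my bag", "baggage_status")]

def classify(utterance: str) -> str:
    """Stand-in for the intent model: a naive keyword match."""
    return "baggage_status" if "bag" in utterance else "unknown"

def handle(utterance: str, human_correction: Optional[str] = None) -> str:
    intent = classify(utterance)
    if human_correction and human_correction != intent:
        # The corrected pair feeds the next retraining cycle.
        training_set.append((utterance, human_correction))
        intent = human_correction
    return intent

print(handle("my suitcase never arrived", human_correction="baggage_status"))
print(len(training_set))  # grew by one corrected example
```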

Earley said that deciphering accents was a part of speech recognition that improves as learning algorithms read the nuances in any expression. For video recognition, vectors and vector spaces — with clusters of the characteristics of objects — are used and people help to compare and identify them, Earley said.

The virtuous circle of adoption, improvement and redesign of applications has begun for cognitive AI. While still far from perfect, there is enough interest to advance its commercial viability.

Previously published by Light Reading’s Telco Transformation

Mesh networks open IoT’s rich last-mile data seams for mining

By Kishore Jethanandani

Mesh networks (or the alternative star topology networks connecting devices to routers) afford the mining of data in IoT’s last mile. By interconnecting mobile devices, mesh networks can funnel data from sensors to local gateways or the cloud for analysis and decision-making.

Wired and wireless telecom networks do not reach the distant regions or the nooks and crannies for the mining of information-rich seams of data. Mining, oilfields, ocean-going vessels, electricity grids, emergency response sites like wildfires, and agriculture are some of the information-rich data sources rarely tapped for analytics and decision-making in real-time or otherwise.

Where telecom coverage is available, it does not necessarily reach all assets. Data generated by sensors embedded in equipment on factory floors, underground water pipes in cities, or inventory in warehouses cannot readily access cellular or wired networks.

A typical remote field site is an oil exploration and production operation in California with dispersed wells, where ten operators gathered data on tank levels, gas flows and well-head pressures. Now, with a mesh network, operating managers can access this data anywhere and respond to alerts in real time.

Onsite mesh networks are deployed for microscopic monitoring of equipment to watch for losses such as energy leakages. Refineries are labyrinths of pipes with relief valves that release pressure to avoid blow-ups; one refinery in Singapore had a thousand valves to monitor manually. These valves do not always shut tightly, or they need maintenance, and gases trickle out. Over time, the losses add up. Acoustic signals can pinpoint otherwise unnoticeable leakages and transmit the data via mesh networks to databases; any deviation from the pattern prompts action to stop the losses.
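
A hypothetical sketch of that pattern-deviation check: a valve whose acoustic reading departs from its own historical baseline by more than three standard deviations is queued for inspection. The readings and threshold are invented.

```python
# Invented-number sketch of flagging valve leaks from acoustic readings.

import statistics

def leaking(history: list, latest: float, threshold: float = 3.0) -> bool:
    """Flag a reading more than `threshold` standard deviations off baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) > threshold * stdev

baseline_db = [30.1, 29.8, 30.3, 30.0, 29.9, 30.2]  # normal acoustic levels
print(leaking(baseline_db, 30.1))  # False: within the valve's own pattern
print(leaking(baseline_db, 34.0))  # True: deviation prompts action
```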

The prospects of on-premise mesh networks adoption have improved with the emergence of smart Bluetooth and beacons. With smart Bluetooth technology, an IP layer is built on top of the data layer for ease of connecting devices. Beacons are publicly available for anyone to use for building networks.

We spoke to Rob Chandhok, president and chief operating officer at San Francisco-based Helium Systems Incorporated, to understand his company’s approach to mining the data in IoT’s last mile. Helium’s current solutions target the healthcare industry, in particular its refrigeration and air-conditioning equipment. “Hospitals have hundreds of refrigerators to store medicines which are likely to be damaged if the doors are inadvertently left open,” Chandhok explained to us.

The touchstone of Helium’s technology is its programmable sensors, embedded with a choice of scripts capable of rudimentary arithmetic like calculating the difference in temperature between two rooms. As a result, the sensors generate more data than would be possible with an investment in dumb hardware alone. Helium uses a star topology for the last-mile network, connected to a cloud that hosts a platform for analytical solutions. The entire system is configurable, from the sensor to the cloud, for generating data for the desired thresholds, alerts or analytical models.

“The architecture is intended to optimize processes across the system,” Chandhok told us. He illustrated with an example of the impact of pressure differences: germs are less likely to enter a room if the internal pressure is higher than the external pressure.

Configurable sensors help tailor a solution to the desired outcome. Vaccine potency is greatest if the temperature stays in the range of 2-8 degrees centigrade (35.6-46.4 F). By contrast, cell cultures are rendered useless, and thousands of dollars lost, if the temperature falls outside 37 degrees centigrade, plus or minus 0.5.
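
A minimal sketch of the configurable-threshold idea using the ranges quoted above; the asset names and alerting plumbing are invented.

```python
# Threshold ranges from the text above; everything else is hypothetical.

THRESHOLDS = {
    "vaccine_fridge": (2.0, 8.0),    # degrees C, potency preserved in range
    "cell_culture":   (36.5, 37.5),  # 37 C plus or minus 0.5
}

def check(asset: str, temp_c: float) -> str:
    """Return 'ok' or an alert string for one temperature reading."""
    low, high = THRESHOLDS[asset]
    if low <= temp_c <= high:
        return "ok"
    return f"ALERT: {asset} at {temp_c} C, outside {low}-{high} C"

print(check("vaccine_fridge", 5.0))
print(check("cell_culture", 35.8))
```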

In the hospitality industry, the purpose is to improve customer service by keeping temperatures in range to minimize discomfort. Guests do not have to wait until air-conditioning brings temperatures to the desired levels which vary by region and seasons.

The roster of solutions expands as Helium learns more about its clients’ problems. In the course of working with hospital customers, Helium was made aware of the routine underutilization of beds. Speaking of future plans, Chandhok said, “We can improve the rate of utilization of beds in hospitals with automatic and real-time tracking with infrared sensors.” Currently, nurses manually record the occupancy of beds, usually with a lag. “Nurses are unable to keep pace as patients are moved at short intervals before and after operations,” Chandhok explained.

For a field site application, we spoke to Weft’s Ganesh Sivaraman, director of product management, as well as Erin O’Bannon, director of marketing, about the company’s supply chain solutions. Weft uses mesh networks to determine the location and condition of cargo on ships, their expected time of arrival, the inflow of cargo once ships are docked at ports, and the extent of port congestion. “The mesh network brings near real-time visibility to the state of flow of the cargo,” Sivaraman told us. However, he clarified that its applications do not currently afford tracking of cargo at the pallet level or its flow through the supply chain. “We use predictive analytics, using proprietary and third-party data sources, to help clients time the deployment of trucks to pick up the cargo with minimal delay,” O’Bannon told us. “With visibility, clients can anticipate delays, which lets them plan alternative routes for trucks to shorten delivery times, or switch to air transportation if the gain in time is worth the cost,” she explained.

Mesh networks will evolve from vertical solutions to an array of horizontal ones. Home automation, for example, will likely be linked with fire prevention services and with homeowners’ connected cars. Left to themselves, analytics companies could build duplicative infrastructure. We spoke to Shilpi Kumar, who works on product development at Filament, a company specializing in mesh connectivity for industries, to understand how this evolution will shape the architecture of last-mile IoT networks. “Decentralized mesh infrastructure-as-a-service serves the needs of multiple analytics companies with network policies enforced by blockchain-based contracts,” Kumar told us. “The interlinking of mesh networks with a secure overlay prepares the way for exchanges between devices in an ecosystem, such as vehicles paying for parking automatically.”

Mesh networks expand the universe of the Internet of Things by making remote data sources accessible. They also raise the level of granularity of data sources that are nominally reachable with existing networks. As a result, these mesh networks expand the array of opportunities for optimizing business processes.

Global Supply Chains: the connecting tissue for dispersed supply centers

International Procurement Operations (IPOs) are the nerve centers of decision-making for efficient global procurement operations. Expansion into low-cost countries brings within the fold of an extended enterprise a coalition of suppliers, buyers and logistics companies who can be more productive when they work in concert. IPOs forge a network, whose members are initially tenuously tied to each other by their transactions, into an interconnected global procurement network joined together by long-term relationships. The several poles of decision-making local to a department, business unit or geography are merged into a synchronized management process that spans the global procurement network.

Hedge Funds: alpha for the masses

The alpha risk for hedge funds
The tsunamis of financial risk

Pension funds face the prospect of an exponential increase in withdrawals from retiring subscribers despite the shrinking value of their asset base. In this scenario, hedge funds provide a powerful tool for effective liability hedging, and pension funds look to them as a last hope to cover their unfunded liabilities.

The specter of inflation has increased the appetite for capital preservation among endowments and foundations, and these investors have targeted rates of return that can’t be achieved with today’s low-yielding bonds. Alternative investments in global real estate and natural resources have a chance of making up for the low returns.

Predictive Analytics: ready for surprises

Customers can now see that the early CRM technologies had a modest objective of accumulating transaction data. The truth is that the “irrational optimism” about CRM clouded judgments in the 1990s. The “irrational pessimism” that ensued missed the promise of CRM, i.e., the ground had been prepared for decision support solutions including predictive analytics.

Balanced scorecards: numbered for excellence

The dashboard for performance

Autonomous business units within a larger corporation have a life of their own far removed from the efficiency concerns of headquarters. The balanced scorecard methodology lets enterprises delve deeper into financial numbers to understand the root causes of their performance. Enterprises want to share the information with their employees to pinpoint problems much faster and use actionable intelligence to correct errors.


Mobile Collaboration in enterprises: Latencies and Real-time Decision-making

Technological barriers to collaboration on the fly are beginning to fall. On-premise video-conferencing solutions exist in the market today, while mobile video-conferencing is in its infancy. Larger teams will need point-to-multipoint video-conferencing; a project manager, for example, will want to share visuals with the multiple members of a team executing a task. Video conferencing is now possible on mobile devices that can handle up to ten participants. Communications with multiple members of a team are likely to produce media clutter, which can be reduced with selective, role- and context-specific distribution of content.

Read the full white paper here

Mobile Collaboration in Health Sector

Wider adoption of remote care is now possible with the panoply of technological tools now available such as sensors, medical devices, smart mobile devices and internet video. The sensors let doctors read vital signs from a distance, robots let them do hospital rounds more frequently without being present, video lets them see a patient in another location, smart mobile devices with embedded cameras let them track emergency situations in real time and internet connections let them download images from distant storage devices.


Read the full white paper here