Parallel Reservoir Computing – Realizing Next Generation AI Research

Parallel Reservoir Computing

Reservoir computing (RC) is anticipated as a next-generation artificial intelligence (AI) technique.1 Modern deep learning models and datasets are often very large and may require distributed computation to train in a timely manner. By comparison, RC offers fast, simple training relative to other recurrent neural networks.2 Parallelization allows the RC approach to handle chaotic systems of almost any size, as long as proportionate computing resources are dedicated to the task.3

[Figure: Schematic overview of the Reservoir Computing technique.4]
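To make the technique concrete, the following is a minimal echo state network sketch of reservoir computing in NumPy. It is not BOSS's implementation; the reservoir size, spectral radius, tanh nonlinearity, toy sine signal, and ridge-regression readout are all illustrative assumptions. The key idea it demonstrates is that only the linear readout is trained, while the recurrent reservoir weights stay fixed.

```python
# Minimal echo state network (ESN) sketch of reservoir computing.
# All sizes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 200                                # input and reservoir sizes
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))        # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))          # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius < 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u; collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train only the linear readout (ridge regression) to predict the next step.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))        # toy signal
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

Because training reduces to a single linear solve rather than backpropagation through time, the readout fits in one pass, which is the source of RC's speed advantage over RNN/LSTM training.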

While hardware and neuromorphic approaches to RC may be years away, BOSS's Distributed Training capability is being leveraged to implement parallel RC efficiently in software today, allowing exploration of RC use cases now.

BOSS Enterprise AI Platform

The BOSS Enterprise AI Platform enables businesses to differentiate themselves and develop applications powered by data-driven AI innovation. By capturing, securing, and harnessing data, enterprises can turn that data into Enterprise AI outcomes. The BOSS platform is secure, compliant, and leverages state-of-the-art open-source technologies across the complete end-to-end data and machine learning pipeline. Where needed, BOSS augments these capabilities with innovative research and development to make Enterprise AI easy to leverage in learning from data and growing business outcomes.

Distributed Training

The volume of data and the size of the deep neural networks needed to generate accurate results can be challenging, and the computing infrastructure required to train a deep neural network at scale can become cost-prohibitive.


The BOSS team has performed extensive research on efficiently coordinating massively parallel computers for very large data and compute challenges, and has applied this capability to deep learning in its patent-pending technology. This makes running deep neural networks much faster and much less expensive: large training jobs complete in a more timely fashion, allowing data science iterations to occur more quickly.

BOSS Implementation of Parallel Reservoir Computing

BOSS has leveraged its distributed training capability to implement a Parallel Reservoir Computing alternative to deep learning Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) approaches to natural language processing. The results, including performance, scaling, and accuracy, are being showcased at Supercomputing 2018 (SC18) in Dallas, Texas.
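The general idea behind parallel reservoir computing can be sketched as follows: a large multivariate signal is split into groups of channels, and each group gets its own small reservoir that sees only its channels plus a "halo" of neighboring channels for context. This sketch is not BOSS's patent-pending implementation; the group sizes, halo width, toy signal, and sequential loop standing in for distributed execution are all illustrative assumptions.

```python
# Sketch of parallel reservoir computing: each small reservoir predicts one
# chunk of a larger signal, using overlapping neighbor context as input.
# All sizes and the toy signal are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
D, groups, halo, n_res = 8, 4, 1, 100   # channels, reservoirs, overlap, size
chunk = D // groups                      # channels owned by each reservoir

t = np.linspace(0, 20 * np.pi, 1500)
U = np.stack([np.sin(t + 0.3 * d) for d in range(D)], axis=1)  # toy field

def train_local(g):
    """Train one reservoir on group g's channels plus a periodic halo."""
    idx = [(g * chunk + j) % D for j in range(-halo, chunk + halo)]
    u = U[:-1, idx]                        # local inputs with neighbor context
    y = U[1:, g * chunk:(g + 1) * chunk]   # predict own channels' next step
    W_in = rng.uniform(-0.5, 0.5, (n_res, len(idx)))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
    x, X = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        X.append(x.copy())
    X = np.array(X)
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    return np.mean((X @ W_out - y) ** 2)

# Each group trains independently; in a distributed setting each call could
# run on a separate node, which is what lets the scheme scale with system size.
errors = [train_local(g) for g in range(groups)]
print("per-group train MSE:", errors)
```

Because the reservoirs share no trained parameters, the work partitions cleanly across machines, which is the property that lets parallel RC scale to large chaotic systems when proportionate compute is available.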


By leveraging this research and development, BOSS is enabling more and more businesses to turn data into Enterprise AI outcomes. Contact our Data BOSSES today.

References:


