Use Cases


Oil and Gas

A typical oil rig has 30,000 sensors generating data, yet less than one percent of that data is currently used for decision making. Consider the potential lost by leaving all that data unprocessed, not only in increasing production efficiency but also in protecting human life and the environment. With current bandwidth, it would take 12 days to move just one day’s worth of data from an oil rig to the cloud. Even with 5G, there simply is not enough bandwidth to move all the edge data to the cloud. Even if you could, the cost is impractical.

To realize that potential, big data processing and machine learning must run on the oil rig itself. Hivecell makes that possible.


Renewable Energy

Wind turbines and solar farms must squeeze out every bit of efficiency. The key to increased efficiency is data: not only scientific analysis of performance but also accurate prediction of and response to environmental changes. The volume of data is huge and the compute demand is intense. Moving all that data to the cloud is impractical and expensive. The answer lies in edge computing. The answer lies in Hivecell.


Quick Service Restaurant Chains

The connected kitchen is a reality. Fryers, freezers, coffee makers, grills: each new appliance produces digital metrics of its performance and provides digital controls to fine-tune its efficiency. How will you take advantage of this new capability? How will you continually deploy new software to thousands of locations?

Computer vision adds another dimension, enabling automatic monitoring and measurement of staff efficiency in both kitchen and point-of-sale operations. However, those information streams can only be processed efficiently onsite. How will you deploy new machine learning models to make your human operations continually smarter?

The answer to these and many other questions lies in the unique size and capabilities of Hivecell.



Manufacturing

The industrial internet of things, the industrial internet, Industry 4.0: call it what you will. There is a clear and present demand to apply streaming big data and machine learning technology to the existing sensor data and machine-to-machine communication on the factory floor. Hivecell is the ideal platform for deploying and continually evolving distributed software in the distributed environment of manufacturing. Need high availability? Stack two or more Hivecells together at the station. Need to expand compute or storage at a station? Stack more Hivecells.

The simple linear scalability of Hivecell is critical for the ever-changing manufacturing environment. Combine that with our simple remote software deployment, and you have the answer for deploying and managing the intelligence you need across the factory floor.



Healthcare

Healthcare is an information-intensive business in volume, risk and security. In a hospital, lives literally depend on the reliable availability of information. Every hospital faces the same challenge: where to put the computers? Space, electricity and cooling are at a premium. The network closets on every floor are already full, yet every day brings new demands for compute power.

You can place Hivecells anywhere: on a bookshelf, on a desktop, in a cabinet. Each Hivecell has a built-in battery backup, so there is no need for bulky uninterruptible power supplies (UPS). Hivecells connect directly to one another, which reduces both your wireless and wired routing load.



Weather

Weather is big business. It affects many industries, including agriculture, transportation, tourism, construction and retail. Companies in these industries gain a competitive advantage by understanding how weather affects their customers’ behavior and by proactively anticipating that behavior through accurate weather forecasts.

Raw weather data is 40 terabytes per day and growing. The cost to move that much data to the cloud is $80,000 per month. The cost to store one year of that data (15 petabytes) is $300,000 per month.
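The figures above can be sanity-checked with some quick arithmetic, assuming a 30-day billing month and 1 PB = 1,000 TB:

```python
# Back-of-the-envelope check of the weather-data figures.
TB_PER_DAY = 40  # raw weather data generated per day

# One year of raw data, in petabytes (1 PB = 1,000 TB).
annual_pb = TB_PER_DAY * 365 / 1_000
print(f"Annual volume: {annual_pb:.1f} PB")  # ~14.6 PB, consistent with the ~15 PB figure

# Implied unit costs from the quoted monthly prices.
transfer_cost_per_tb = 80_000 / (TB_PER_DAY * 30)  # $80,000/month to move the data
print(f"Transfer cost: ${transfer_cost_per_tb:.2f} per TB")

storage_cost_per_pb_month = 300_000 / 15  # $300,000/month to store 15 PB
print(f"Storage cost: ${storage_cost_per_pb_month:,.0f} per PB per month")
```

The annual volume works out to roughly 14.6 PB, matching the quoted 15 PB, and the unit costs make clear why hauling and warehousing all of this data centrally is so expensive.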

Weather data is geographical by its nature. Most consumption and use of weather data occurs in the same location that weather data is generated. For instance, people and companies in Dallas are mostly interested in weather data about Dallas.

It is illogical to move all weather data to one central location and have every user access it from there. It is far more logical to store, analyze and access weather data where it is created.

The challenge of geographically distributed data processing then becomes how to efficiently manage the necessary distributed compute power. That is precisely what Hivecell is designed to solve.


Data Science

Hivecell's built-in, patent-pending provisioning system lets developers install Hadoop on a cluster with a single click. Hivecell supports many other distributed frameworks, including Kubernetes, Spark, TensorFlow and Kafka.

Hivecell is designed for machine learning. Each Hivecell has a GPU with 256 CUDA cores, so data scientists now have the resources they need to design, train and test models inexpensively and securely at their desktops.

With Hivecell, installing a distributed software framework is as easy as installing an app on your smartphone. Developers can focus on developing and data scientists can focus on analysis with the convenience of multi-server compute power on their desktop.