Navigating the Growing Data Tsunami with Edge Computing
From telecommunications networks and the manufacturing floor to financial services and autonomous vehicles, computers are everywhere these days, generating a growing tsunami of data that needs to be captured, stored, processed, and analyzed.
At Red Hat, we see edge computing as an opportunity to extend the open hybrid cloud all the way to data sources and end users. While data has traditionally lived in the datacenter or cloud, there are benefits and innovations to be realized by processing the data these devices generate closer to where it is produced.
This is where edge computing comes in.
What is edge computing?
Edge computing is a distributed computing model in which data is captured, stored, processed, and analyzed at or near the physical location where it is created. By pushing computing out closer to these locations, users benefit from faster, more reliable services while companies benefit from the flexibility and scalability of hybrid cloud computing.
Edge computing vs. cloud computing
A cloud is an IT environment that abstracts, pools, and shares IT resources across a network. An edge is a computing location at the edge of a network, along with the hardware and software at those physical locations. Cloud computing is the act of running workloads within clouds, while edge computing is the act of running workloads on edge devices.
You can read more about cloud versus edge here.
4 benefits of edge computing
As the number of computing devices has grown, our networks simply haven't kept pace with demand, making applications slower or more expensive to host centrally.
Pushing computing out to the edge helps reduce many of the issues and costs related to network latency and bandwidth, while also enabling new types of applications that were previously impractical or impossible.
1. Improve performance
When applications and data are hosted in centralized datacenters and accessed via the internet, speed and performance can suffer from slow network connections. Moving applications and data out to the edge reduces network-related performance and availability issues, although it doesn't eliminate them entirely.
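To make the latency difference concrete, here is a minimal sketch that times TCP connection setup to a far-away central endpoint versus a nearby edge endpoint. The hostnames are placeholders, not real services, and the measurement only captures connection time, not full request latency.

```python
# Minimal sketch: compare TCP connect latency to a central endpoint vs. a
# nearby edge endpoint. Hostnames below are placeholders, not real services.
import socket
import time

ENDPOINTS = {
    "central-datacenter": ("central.example.com", 443),  # assumed remote host
    "local-edge-node": ("edge.local", 443),              # assumed on-site host
}

def connect_latency_ms(host: str, port: int, attempts: int = 5) -> float:
    """Average time to open a TCP connection, in milliseconds."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.perf_counter() - start
    return (total / attempts) * 1000

if __name__ == "__main__":
    for name, (host, port) in ENDPOINTS.items():
        try:
            print(f"{name}: {connect_latency_ms(host, port):.1f} ms")
        except OSError as err:
            print(f"{name}: unreachable ({err})")
```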
2. Place applications where they make the most sense
By processing data closer to where it’s generated, insights can be gained more quickly and response times reduced drastically. This is particularly true for locations that may have intermittent connectivity, including geographically remote offices and on vehicles such as ships, trains, and airplanes.
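For sites with intermittent connectivity, a common pattern is store-and-forward: keep processing and recording locally, and sync with the datacenter whenever the link is available. The sketch below illustrates the idea; `read_sensor` and `send_to_datacenter` are hypothetical stand-ins for whatever data source and uplink a real deployment would use.

```python
# Minimal store-and-forward sketch for an intermittently connected edge site.
import time
from collections import deque

buffer: deque[dict] = deque(maxlen=10_000)  # cap local storage

def read_sensor() -> dict:
    """Hypothetical local reading; replace with a real data source."""
    return {"ts": time.time(), "value": 42.0}

def send_to_datacenter(batch: list[dict]) -> bool:
    """Hypothetical uplink; returns False while the link is down."""
    return False  # placeholder

while True:
    buffer.append(read_sensor())   # always record locally
    batch = list(buffer)
    if batch and send_to_datacenter(batch):
        buffer.clear()             # forward only when the link is up
    time.sleep(1.0)
```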
3. Simplify meeting regulatory and compliance requirements
Different situations and locations often have different privacy, data residency, and localization requirements, which can be extremely complicated to manage through centralized data processing and storage, such as in datacenters or the cloud.
With edge computing, however, data can be collected, stored, processed, managed, and even scrubbed in place, making it much easier to meet different locales' regulatory and compliance requirements. For example, edge computing can be used to strip personally identifiable information (PII) or faces from video before it is sent back to the datacenter.
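A minimal sketch of that face-scrubbing example follows. It assumes OpenCV (the opencv-python package) with its bundled Haar cascade face detector; the camera index and the commented-out uplink call are placeholders, not part of any specific product.

```python
# Minimal sketch of edge-side PII scrubbing: detect faces in each frame and
# blur them before anything leaves the site. Assumes OpenCV (opencv-python).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
capture = cv2.VideoCapture(0)  # local camera at the edge site (placeholder)

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Blur each detected face region in place before the frame goes anywhere.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0
        )
    # forward_to_datacenter(frame)  # hypothetical uplink for the scrubbed frame

capture.release()
```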
4. Enable AI/ML applications
Artificial intelligence and machine learning (AI/ML) are growing in importance and popularity since computers are often able to respond to rapidly changing situations much more quickly and accurately than humans.
But AI/ML applications often require processing, analyzing, and responding to enormous quantities of data, which can't reasonably be achieved with centralized processing because of network latency and bandwidth constraints. Edge computing allows AI/ML applications to be deployed close to where data is collected, so analytical results can be obtained in near real-time.
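As a rough illustration, the sketch below runs inference on locally collected data with ONNX Runtime, so results are available without a round trip to the datacenter. The model file, input shape, and sample data are all hypothetical placeholders.

```python
# Minimal sketch of local (edge-side) inference with ONNX Runtime.
import numpy as np
import onnxruntime as ort

# Hypothetical model stored on the edge node; name and shape are placeholders.
session = ort.InferenceSession("anomaly_detector.onnx")
input_name = session.get_inputs()[0].name

def classify(sample: np.ndarray):
    """Run the model on one locally collected sample and return its output."""
    return session.run(None, {input_name: sample.astype(np.float32)})[0]

# Example: score a batch of sensor readings captured on this edge node.
readings = np.random.rand(1, 16)  # placeholder for real sensor data
print(classify(readings))
```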
Red Hat’s approach to edge computing
Of course, the many benefits of edge computing come with some additional complexity in terms of scale, interoperability, and manageability.
Edge deployments often extend to a large number of locations that have minimal (or no) IT staff, or that vary in physical and environmental conditions. Edge stacks also often mix and match a combination of hardware and software elements from different vendors, and highly distributed edge architectures can become difficult to manage as infrastructure scales out to hundreds or even thousands of locations.
The Red Hat Edge portfolio addresses these challenges by helping organizations standardize on a modern hybrid cloud infrastructure, providing an interoperable, scalable and modern edge computing platform that combines the flexibility and extensibility of open source with the power of a rapidly growing partner ecosystem.
The Red Hat Edge portfolio allows organizations to build and manage applications across hybrid, multi-cloud, and edge locations, accelerating application innovation, speeding up deployment and updates, and improving overall DevSecOps efficiency.
To learn more, visit Red Hat.