Leveraging Microsoft AI: A game changer for manufacturing
The manufacturing edge is the constellation of hardware and software on the plant's premises. It encompasses the full range of endpoint devices and often a localized data center for storing data and running analytics, monitoring, and other applications.
But many organizations are implementing edge strategies that do not account for AI and GenAI requirements. The results are inefficient utilization of edge resources, needlessly complex machine learning models, and impractical use cases, all of which lead to slow or suboptimal adoption by end users.
To deploy and optimize AI on the edge, manufacturing IT leaders should:
- Segment the plant/assets: Grouping the plant/assets appropriately ensures balanced edge architecture.
- Design with the end in mind: Taking a long-term view of costly edge hardware investments helps keep costs optimal.
- Miniaturize AI models: Reducing model size reduces the computing resources needed to train and run it, speeds the model’s operations, and enables AI to be more widely distributed in the organization.
- Optimize network load management: Processing data on the edge minimizes the volume of real-time data transiting over the network and reduces latency.
At the edge, AI can work locally with local data generated from a plant’s operational technology layer, including PLCs, controllers, industrial PCs, IoT sensors, cameras, RFID tags, and more. With latency minimized, locally deployed AI enables autonomous operations (not just automated operations) and real-time responsiveness.
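As a rough illustration of these two ideas, local inference and reduced network load, the sketch below scores sensor windows on the edge device and forwards only alerts plus a compact summary upstream. It assumes ONNX Runtime as one possible lightweight edge inference runtime; the model file, the "input" tensor name, the sensor reader, and the publish function are hypothetical placeholders, not part of any specific product.

```python
# Minimal sketch, not production code: score sensor windows locally and send
# only alerts plus a periodic summary upstream, so the raw high-frequency
# sensor stream never transits the network.
import statistics

import numpy as np
import onnxruntime as ort  # one possible lightweight edge inference runtime

# Hypothetical pre-exported fault-prediction model; "input" is an assumed
# tensor name and must match whatever the exported model declares.
session = ort.InferenceSession("fault_model.onnx")

def read_vibration_window() -> np.ndarray:
    """Placeholder for a driver reading a window of PLC/sensor samples."""
    return np.random.rand(1, 128).astype(np.float32)

def publish_upstream(topic: str, payload: dict) -> None:
    """Placeholder for MQTT/OPC UA publishing to the plant data center."""
    print(topic, payload)

scores = []
for _ in range(600):  # one monitoring interval of sensor windows
    window = read_vibration_window()
    score = float(session.run(None, {"input": window})[0].ravel()[0])
    scores.append(score)
    if score > 0.9:  # forward anomalies immediately for real-time response
        publish_upstream("plant/line1/fault_alert", {"score": score})

# Only this compact summary leaves the edge, not the raw sensor stream.
publish_upstream("plant/line1/summary", {
    "mean_score": statistics.mean(scores),
    "max_score": max(scores),
    "windows": len(scores),
})
```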
Evaluate a “mesh-of-edges” or edge mesh approach
The above considerations suggest an approach that can be called a “mesh-of-edges.” This approach hosts optimally sized ML/AI models on the most appropriate compute resources, provides the ability to scale the architecture for future requirements, and keeps costs in check.
As a result, edge AI can be leveraged effectively for tasks such as real-time production monitoring, inventory management, real-time machine fault prediction, process optimization, quality control automation, production line diagnostics, and much more.
A representative mesh-of-edges is shown in the diagram below. Each edge hosts the ability to compute low- to medium-scale machine learning calculations.
[Diagram: representative mesh-of-edges architecture. Source: Tata Consultancy Services]
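One way to think about such a mesh is declaratively: each edge node carries a compute tier and a rough budget for the models it is sized to host. The sketch below is illustrative only; the node names, tiers, budgets, and model names are hypothetical and the placement rule is deliberately simplistic.

```python
# Illustrative only: describe a mesh-of-edges declaratively and place each
# model on the smallest node whose budget can hold it.
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    name: str
    tier: str                 # e.g. "gateway", "cell-server", "plant-dc"
    max_model_mb: int         # rough budget for hosted model size
    models: list[str] = field(default_factory=list)

mesh = [
    EdgeNode("press-line-gateway", "gateway", max_model_mb=50,
             models=["vibration-anomaly-int8"]),
    EdgeNode("assembly-cell-server", "cell-server", max_model_mb=500,
             models=["visual-qc-detector", "cycle-time-forecaster"]),
    EdgeNode("plant-data-center", "plant-dc", max_model_mb=5000,
             models=["line-diagnostics-ensemble"]),
]

def place_model(model: str, size_mb: int) -> EdgeNode:
    """Assign a model to the smallest node whose budget can hold it."""
    for node in sorted(mesh, key=lambda n: n.max_model_mb):
        if size_mb <= node.max_model_mb:
            node.models.append(model)
            return node
    raise ValueError(f"No edge node can host {model} ({size_mb} MB)")

print(place_model("energy-optimizer-int8", size_mb=120).name)  # assembly-cell-server
```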
Prioritize the art of miniaturizing AI models
Miniaturizing AI/ML models greatly reduces the size and cost of the compute resources needed on the edge. There are various techniques for right-sizing and then miniaturizing these models, and different ways to host them on the edge that optimize the storage and compute capacities the edge device needs. An intelligently designed pipeline of processing models ensures optimal compute usage without running the edge at 100% of its capacity.
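Common miniaturization techniques include quantization, pruning, and knowledge distillation. As a minimal sketch of one of them, the snippet below applies post-training dynamic quantization in PyTorch; the three-layer network is a stand-in for a real fault-prediction or quality-control model, and in practice you would quantize your own trained model.

```python
# Sketch of one miniaturization technique: post-training dynamic quantization
# in PyTorch. Linear weights are converted to int8; activations are quantized
# dynamically at runtime.
import io

import torch
import torch.nn as nn

model = nn.Sequential(              # placeholder for a trained edge model
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 2),
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialize the state dict in memory and report its size in MB."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32: {size_mb(model):.2f} MB, int8: {size_mb(quantized):.2f} MB")
```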
Make the business case for AI-on-edge
Designing and optimizing the edge for your organization’s AI development has concrete business as well as operational benefits:
- Edge computing supports instant, AI-powered data analysis to make real-time decisions in response to changes taking place within the manufacturing plant.
- Edge computing increases the reliability of critical AI applications, because they are no longer dependent on cloud processing or connectivity. This capability is part of a strategy of continuous operation.
- AI at the edge can leverage compute and storage as needed for training models and, through low-code initiatives, create lightweight AI applications that require fewer resources.
Manufacturing IT leaders can act now to configure and optimize their edge computing infrastructure to enable effective AI deployments using Microsoft AI. Here are five steps to consider:
- Identify key use cases: Assess which AI applications will yield the maximum business value when deployed in edge computing.
- Implement edge infrastructure: Map out what’s required in terms of resources and expertise to set up a scalable edge infrastructure for AI and integrate with existing systems.
- Enhance security: Develop a comprehensive edge security strategy that explicitly addresses AI-specific security issues.
- Train and upskill teams: Prioritize the training needed for internal IT and operational teams to manage edge computing infrastructure supporting AI applications.
- Pilot projects before scaling: Initiate pilot projects to test and refine edge computing plans before large-scale deployment.
The bottom line
IT leaders can speed up AI deployment at the edge by partnering with a systems integrator like Tata Consultancy Services (TCS), which offers expertise, platforms, and services for edge computing and AI in partnership with Microsoft. With TCS, IT leaders can effectively harness edge computing and AI to optimize their operations, improve efficiency, and ensure the reliability of their AI applications.
To learn more about how TCS can help manufacturing IT leaders optimize the edge for AI, see Next generation manufacturing enterprise: powered by GenAI.