IBM to set up 'full stack' AI facility at university
IBM has unveiled plans to set up a “full-stack” tech infrastructure at a Singapore university to support research and development efforts in artificial intelligence (AI).
Located at the National University of Singapore (NUS), the new facility will be equipped with IBM’s AIU (Artificial Intelligence Unit) accelerators, open-source AI models including the tech vendor’s Granite large language models (LLMs), the Watsonx data and AI platform, and Red Hat’s hybrid cloud technologies.
The center will look to support local academic and research institutions and businesses in their AI development efforts, according to IBM. The tech giant is touting the center as the “first such full-stack AI infrastructure” to be established on a university campus in Asia-Pacific.
NUS and IBM will also jointly develop products and methodologies that aim to build trust in AI. Their research areas will focus on green AI, safe AI, and domain-centric AI developments, said Tan Kian Lee, dean of NUS School of Computing.
The efforts will focus on designing and deploying scalable AI systems that consume less energy and compute, as well as AI models with lower data requirements, Tan said during a media briefing. The goal, he added, is to achieve these aims while maintaining or enhancing overall performance.
Tan said one potential research area is using smaller AI models, which require fewer computing resources and can run on edge devices. The research center may also explore how software and hardware can be better aligned for optimal performance and intelligence.
When asked, Tan said no projects are currently focused on building tools to detect or combat deepfakes.
Instead, he pointed to other efforts to enhance AI safety and trust, ensure training data is verified and factual, and prevent data leaks. He said research here may include “machine unlearning”, which covers techniques to scrub or remove sensitive or unsafe data from AI models without retraining them from scratch.
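The article does not describe the specific unlearning methods IBM and NUS plan to study. As a rough illustration of the idea only, one simple family of approaches applies gradient ascent on the examples to be forgotten, pushing the model away from that data without a full retrain. The sketch below uses PyTorch; the function name and loop are hypothetical and not drawn from any IBM or NUS code.

```python
# Minimal sketch of gradient-ascent "machine unlearning" (illustrative only).
# Assumes a classification model and a DataLoader over the data to be forgotten.
import torch
import torch.nn as nn

def unlearn_by_gradient_ascent(model, forget_loader, lr=1e-4, epochs=1):
    """Increase the loss on the forget set so the model no longer fits it."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for inputs, labels in forget_loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            # Negate the loss so the optimizer ascends it, "unlearning" these examples.
            (-loss).backward()
            optimizer.step()
    return model
```

In practice such methods are usually combined with checks that accuracy on the retained data is preserved, which echoes the stated goal of removing unsafe data while maintaining overall performance.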
In May, IBM inked an agreement with AI Singapore (AISG) to test the latter’s Southeast Asian LLM and make it available for developers to build customized AI applications.
Under the partnership, IBM will test the Southeast Asian Languages in One Network (SEA-LION) model using Watsonx and work with AISG to fine-tune the LLM. The goal is to help organizations choose suitable AI models for their business requirements.