IT leaders look beyond LLMs for gen AI needs
Putting new models to work
Labor-scheduling SaaS provider MakeShift is another organization looking beyond LLMs to perform complex predictive scheduling for its healthcare, retail, and manufacturing clients.
“We were using LLMs for chat support for administrators and employees, but when you get into vector data and large graphical structures with a couple of hundred million rows of interrelated data, and you want to optimize towards a predictive model for the future, you can’t get anywhere with LLMs,” says MakeShift CTO Danny McGuinness.
Instead, MakeShift is adopting a new, patent-pending approach dubbed the large graphical model (LGM), from MIT startup Ikigai Labs.
“We’re leveraging the large graphical models with complex structured data, establishing those interrelationships, causation, and correlation,” McGuinness says.
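To make that idea concrete, here is a minimal sketch of the general technique behind graphical models for tabular data: estimate a sparse dependency graph among columns, then make conditional predictions from the fitted joint distribution. It uses scikit-learn’s GraphicalLasso on synthetic, scheduling-style data; the column names and numbers are invented for illustration, and this is the broad textbook technique, not Ikigai Labs’ patent-pending LGM.

```python
# A minimal, illustrative sketch of the general graphical-model idea for
# tabular data -- NOT Ikigai Labs' patent-pending LGM. Columns and data
# below are hypothetical scheduling-style examples.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical shift records: staffing tracks demand, overtime tracks staffing.
demand = rng.normal(100, 15, n)
staff = 0.5 * demand + rng.normal(0, 4, n)
overtime = 0.25 * staff + rng.normal(0, 3, n)
X = np.column_stack([demand, staff, overtime])
cols = ["demand", "staff", "overtime"]

# Step 1: learn the dependency structure. In a Gaussian graphical model,
# zeros in the precision (inverse covariance) matrix mean two columns are
# conditionally independent given the rest -- the "interrelationships".
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize before the lasso
graph = GraphicalLasso(alpha=0.1).fit(Z)
print("precision matrix (nonzero off-diagonals = edges):")
print(np.round(graph.precision_, 2))

# Step 2: predict one column from the others using the standard Gaussian
# conditional-mean formula E[x_a | x_b] = mu_a + S_ab S_bb^-1 (x_b - mu_b).
mu = X.mean(axis=0)
S = np.cov(X, rowvar=False)
a, b = [1], [0, 2]                          # predict "staff" from the rest
x_b = np.array([120.0, 14.0])               # observed demand and overtime
pred = mu[a] + S[np.ix_(a, b)] @ np.linalg.solve(S[np.ix_(b, b)], x_b - mu[b])
print(f"predicted staff given demand=120, overtime=14: {pred[0]:.1f}")
```

The two-step pattern (learn structure, then condition on observed columns) is what lets graphical approaches answer the kind of “what if” scheduling questions a text-oriented LLM is not built for; at hundreds of millions of rows, the naive dense covariance used here would of course give way to sparse, factored computations.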
MakeShift joins companies such as Medico, HSBC, Spirit Halloween, Taager.com, Future Metals, and WIO in deploying Ikigai Labs’ no-code models for tabular and time-series data, the kind organized in rows and columns. Co-founded by Devavrat Shah, director of MIT’s AI and Data Science department, and Vinayak Ramesh, Ikigai Labs has doubled its head count in the past six months and closed a $25 million funding round late last year.
Other types of multimodal models, including ones with video support, are also emerging for software services that rely heavily on computer vision, giving CIOs a raft of new AI tools suited to their specific needs.