IT leaders look beyond LLMs for gen AI needs

Putting new models to work
Labor-scheduling SaaS provider MakeShift is another organization looking beyond LLMs to perform complex predictive scheduling for its healthcare, retail, and manufacturing clients.
“We were using LLMs for chat support for administrators and employees, but when you get into vector data and large graphical structures with a couple of hundred million rows of interrelated data, and you want to optimize toward a predictive model for the future, you can’t get anywhere with LLMs,” says MakeShift CTO Danny McGuinness.
Instead, MakeShift is adopting a new patent-pending model type from MIT startup Ikigai Labs, dubbed the large graphical model (LGM).
“We’re leveraging the large graphical models with complex structured data, establishing those interrelationships, causation, and correlation,” McGuinness says.
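Ikigai’s LGM itself is proprietary, but the shape of the problem McGuinness describes — recovering the dependency structure among many interrelated tabular variables — can be illustrated with an off-the-shelf sparse Gaussian graphical model. The sketch below is a minimal, generic example of that technique class, not Ikigai’s implementation; the scheduling features and data are synthetic and hypothetical.

```python
# Minimal sketch: estimate conditional dependencies among tabular scheduling
# variables with a sparse Gaussian graphical model (scikit-learn's
# GraphicalLassoCV). All features and data here are synthetic.
import numpy as np
import pandas as pd
from sklearn.covariance import GraphicalLassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical scheduling features: demand drives staffing and overtime,
# while absences are generated independently of the rest.
demand = rng.normal(100, 15, n)
staffed = 0.9 * demand + rng.normal(0, 5, n)
overtime = np.clip(demand - staffed, 0, None) + rng.normal(0, 2, n)
absences = rng.normal(8, 2, n)

df = pd.DataFrame({
    "demand": demand, "staffed": staffed,
    "overtime": overtime, "absences": absences,
})

# Fit a sparse inverse-covariance model on standardized data; nonzero
# off-diagonal entries mark variable pairs that remain dependent even
# after conditioning on all the other variables.
X = StandardScaler().fit_transform(df.values)
model = GraphicalLassoCV().fit(X)
precision = pd.DataFrame(model.precision_, index=df.columns, columns=df.columns)
print(precision.round(2))
```

The nonzero off-diagonal entries of the estimated precision matrix are the “interrelationships” such a model surfaces: demand, staffing, and overtime stay coupled, while the independently generated absences column drops out.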
MakeShift joins companies such as Medico, HSBC, Spirit Halloween, Taager.com, Future Metals, and WIO in deploying Ikigai Labs’ no-code models for tabular and time-series data. Ikigai Labs — co-founded by Devavrat Shah, director of MIT’s AI and Data Science department, and Vinayak Ramesh — offers AI for tabular data organized in rows and columns. The company has doubled its head count in the past six months and scored a $25 million investment late last year.
Multimodal models with video support are also emerging for software services that rely heavily on computer vision, giving CIOs a raft of new tools for matching AI models to their specific needs.
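The article names no specific video model, but the frame-level computer-vision pattern such services build on is easy to sketch. Everything below is illustrative: the clip name and sampling rate are assumptions, and a pretrained ImageNet classifier stands in for a true multimodal video model, which would ingest many frames jointly.

```python
# Generic sketch of a video computer-vision pipeline: sample frames from a
# clip with OpenCV and classify each sampled frame with a pretrained model.
import cv2
import torch
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.transforms.functional import to_pil_image

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

cap = cv2.VideoCapture("warehouse_clip.mp4")  # hypothetical input file
labels = []
frame_idx = 0
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly one frame per second at 30 fps
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        batch = preprocess(to_pil_image(rgb)).unsqueeze(0)
        with torch.no_grad():
            class_id = model(batch).softmax(dim=1).argmax().item()
        labels.append(weights.meta["categories"][class_id])
    frame_idx += 1
cap.release()
print(labels)
```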