AI developers should be philosophers as much as technologists



Artificial intelligence (AI) always delivers surprises. But as adoption matures, we may be in for the biggest surprise of all: more than the latest-and-greatest technology, reliable data, or thorough training, philosophy may matter most in developing AI systems, according to MIT researchers.


“Software is eating the world, AI is eating software, and philosophy is eating AI,” according to Michael Schrage, research fellow with MIT’s Initiative on the Digital Economy, and David Kiron, editorial director for MIT Sloan Management Review, who expounded on this ultimate AI differentiator in a recent podcast and a related article published in MIT Sloan Management Review.

The two disciplines of AI and philosophy might seem like polar opposites, but Schrage and Kiron argued that one can’t function without the other: “When implementing AI, most organizations obsess over the technology, but our research reveals a surprising truth. Philosophy is what truly determines AI success.”


Using philosophy with AI doesn’t mean incorporating the views of Aristotle or Immanuel Kant, although the researchers said this approach could help. Instead, they said AI should closely reflect the driving philosophies of organizations, such as delivering above-and-beyond customer service or disrupting an inefficient industry.

“Regulation, litigation, and emerging public policies represent exogenous forces mandating that AI models embed purpose, accuracy, and alignment with human values,” the researchers stated. “Deliberately imbuing LLMs with philosophical perspectives can radically increase their effectiveness.” 


A company’s philosophy needs to be injected into “training, tuning, prompting, and generating valuable AI-infused outputs and outcomes,” they explained. Philosophies are the ultimate disrupter when they infiltrate “the training sets and neural nets of every large language model worldwide.”  

The researchers said ethics and responsible AI are just a “small part” of the picture. A company’s philosophy, for example, may focus on wowing the customer with over-the-top experiences to create not just buyers of its products and services, but super-loyal advocates.

Few companies beyond Apple, Disney, and Starbucks, guided by their well-documented philosophies, achieve such rabid loyalty. “Customers become advocates, they become champions, they become defenders,” Schrage said in the podcast. “Can you track those elements and aspects of evangelism and defense and sharing communication that they celebrate?”   

However, mainstream companies often apply AI to dry metrics that are detached from any sense of connection, such as “measuring loyalty with metrics that serve as quantitative proxies and surrogates, to optimize RFM (recency, frequency, and monetary value), churn management, and NPS (net promoter score).” Such superficial metrics are “philosophically decoupled from thoughtful connection to customer loyalty, customer loyalty behaviors, and customer loyalty propensities,” the researchers said.
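For concreteness, the kind of “quantitative proxy” the researchers have in mind is easy to sketch. Below is a minimal, hypothetical RFM scorer over a transaction log; the field names, dates, and customers are illustrative assumptions, not anything from the MIT article or any particular vendor’s tooling.

```python
from datetime import date

# Hypothetical transaction log: (customer_id, purchase_date, amount_spent).
transactions = [
    ("alice", date(2025, 5, 1), 42.0),
    ("alice", date(2025, 6, 20), 18.5),
    ("bob", date(2024, 11, 3), 250.0),
]

def rfm_scores(transactions, today):
    """Collapse a transaction log into per-customer RFM values:
    recency (days since last purchase), frequency (purchase count),
    and monetary value (total spend)."""
    per_customer = {}
    for customer, when, amount in transactions:
        recency, frequency, monetary = per_customer.get(customer, (None, 0, 0.0))
        days_since = (today - when).days
        best_recency = days_since if recency is None else min(recency, days_since)
        per_customer[customer] = (best_recency, frequency + 1, monetary + amount)
    return per_customer

for customer, (r, f, m) in rfm_scores(transactions, date(2025, 7, 1)).items():
    print(f"{customer}: recency={r}d, frequency={f}, monetary=${m:.2f}")
```

The point of the example is the researchers’ critique in miniature: everything this scorer knows about “loyalty” is reduced to three numbers, with no representation of advocacy, evangelism, or connection.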


While language models “excel at pattern recognition and generation to produce sophisticated outputs based on their training, organizations need AI that goes beyond superior prompt-response performance,” they opined. “Agentic AI systems don’t just process and generate language, they contextually understand goals, formulate plans, and take autonomous actions that should align with enterprise values.”

Starbucks, in particular, provides a sterling example of how corporate philosophy gets built into its AI systems. The company “did not simply deploy AI to improve performance on a given set of metrics,” Schrage and Kiron pointed out. “The senior team at Starbucks developed the ‘Deep Brew’ AI platform to promote what they considered to be the ontological essence of the Starbucks experience: fostering connection among customers and store employees, both in store and online.”

The first step in imbuing corporate philosophy is what the researchers call “responsibility mapping,” Schrage said in the podcast. Responsibility mapping includes asking questions such as: “What do we want our software to learn? What do we want our AI to learn? What do we want our agents to learn?” Schrage explained. “This is where you have a marriage of the technical capability with the philosophical need and the business purpose.”


Schrage and Kiron identified four transitions that are part of AI’s philosophical picture:

  • From passive information processing to actively constructed and validated knowledge: For example, “A supply chain AI with strong epistemological training doesn’t just predict disruptions based on historical patterns; it proactively builds and refines causal models of supplier relationships, market dynamics, and systemic risks to generate more nuanced and actionable insights.”
  • From pattern recognition to systemic insights: “An AI managing retail operations shouldn’t default to optimizing inventory based on sales patterns — it understands how inventory decisions affect supplier relationships, cash flow, customer satisfaction, and brand perception.”
  • From task execution to purposeful action: In marketing AI, “rather than optimize clickthrough rates, it pursues engagement strategies balancing immediate metrics with brand equity, customer lifetime value, and market positioning. This shift from outputs to outcomes highlights the purpose of purpose.” (A minimal sketch of such a balanced objective appears after this list.)
  • From rule following to autonomous moral reasoning and ethical deliberation in novel situations: “This goes beyond simple rules or constraints — it’s about installing sophisticated frameworks for evaluating implications and making principled decisions in unprecedented situations. As with all responsible AI models, agentic AI needs its ethical choices to be transparent, interpretable, and explainable.”
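The third transition lends itself to a concrete illustration. Here is a minimal sketch of what “balancing immediate metrics with brand equity, customer lifetime value, and market positioning” might look like as a scoring function over candidate campaigns. The campaigns, metric names, and weights are hypothetical illustrations, not a method Schrage and Kiron prescribe; a real system would learn or negotiate these trade-offs rather than hard-code them.

```python
# Hypothetical candidate campaigns with predicted metrics, each on a 0-1 scale.
campaigns = {
    "clickbait_banner": {"ctr": 0.9, "lifetime_value": 0.2, "brand_equity": 0.1},
    "loyalty_story": {"ctr": 0.4, "lifetime_value": 0.8, "brand_equity": 0.9},
}

# Illustrative weights encoding a philosophy that values long-term
# relationships over immediate clicks. A pure clickthrough optimizer
# implicitly sets these to {"ctr": 1.0, "lifetime_value": 0.0, "brand_equity": 0.0}.
WEIGHTS = {"ctr": 0.2, "lifetime_value": 0.4, "brand_equity": 0.4}

def outcome_score(metrics):
    """Blend immediate and long-horizon metrics into a single outcome score."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

best = max(campaigns, key=lambda name: outcome_score(campaigns[name]))
print(best)  # "loyalty_story" wins despite its lower clickthrough rate
```

Changing the weights changes which campaign “wins,” which is the researchers’ point: the philosophy encoded in the objective, not the pattern recognition underneath it, determines what the system optimizes for.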




