
Is Philosophy the Next LLM Training Frontier?

December 03, 2024

In this Q&A, two MIT researchers argue that philosophical concepts are “the pragmatic underpinning of optimal AI.”

As AI’s capabilities grow, defining and designing the frameworks that guide its development and behavior becomes an even greater human imperative. What should we teach AI to do? How should it be trained to act and respond? What values should be embedded in large language models (LLMs), and why? IDE Visiting Scholar Michael Schrage and MIT Sloan Management Review’s David Kiron have a theory: Philosophy will take AI to its next level.

It’s been more than a decade since venture capitalist Marc Andreessen observed that “software is eating the world.” Then, in 2017, Jensen Huang, the co-founder and CEO of AI chipmaker Nvidia, went further, declaring that “AI is eating software.”

Schrage and Kiron’s work provocatively builds on Andreessen’s and Huang’s digital dining metaphor: Their research argues that “philosophy is now eating AI.”

Successful AI training is inseparable from philosophical training — embedding enterprise-appropriate principles, purpose and ethics into learning frameworks, the researchers say.

Their studies of LLMs, such as ChatGPT and Anthropic’s Claude, suggest that the surest way to make these generative AI platforms more useful, valuable and relevant is to cultivate philosophical capabilities. Critical thinking is key, Schrage says, and he thinks that “re-reading Socrates, Rawls, Mill, Anscombe, Wittgenstein and/or Confucius will be central to boosting Return on GenAI investment.”

In fact, he expects philosophy-driven AI frameworks to become commonplace in business conversations “not just for OpenAI, Anthropic, Google and Microsoft, but for every organization seeking to get transformative value from their GenAI investments and deployments.”

“In an era of both generative and predictive AI,” says Schrage, “strategic enterprise thinking requires critical thinking about your chosen AI philosophies.”

Which philosophical imperatives will fine-tune customer engagements and employee reviews? Philosophy helps make LLM responses more transparent, interpretable and explainable.

Moreover, Schrage and Kiron say, it will assure that GenAI advice and recommendations are more ethical and effective. Indeed, they predict that the most successful users of AI will be leaders who seriously embed their philosophical priorities into their models.

Schrage and Kiron expanded on their new ideas in an interview with IDE Editorial and Content Director Paula Klein.

IDE: Most will agree that ethical, responsible AI is a touchstone for developers. Yet, you are proposing an even higher goal — philosophical AI. How do these concepts meld and overlap? Why do you see philosophy as the ultimate aim for AI success?

MDS: Bluntly, there’s too much emphasis on ethics at the expense of other foundational philosophical perspectives. Generative AI’s rise — and the power and potential of LLMs — means philosophy simultaneously becomes a capability, a sensibility, a dataset and an enabler for training and for gaining greater value from AI investments. Philosophy today is a mission-critical, strategic differentiator for organizations that want to maximize their return on AI.

IDE: Give us a simple and accessible framework for thinking about this.

DK: Philosophical perspectives on what AI models should achieve (teleology), how they know and represent what they know (epistemology), and how they represent reality (ontology) combine with ethical considerations to shape value creation.

There is already a wealth of “philosophy patterns” embedded in LLMs as part of their training. Philosophy is also part of the GPT corpus and of the parameters that shape learning, prompting and fine-tuning. Critical thinking and philosophical rigor will get you better outcomes from both generative and predictive AI models. When designers prompt the model to think better, its responses prompt humans to think better. That’s a virtuous cycle that decision makers need to embrace.
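To make this less abstract, here is a minimal sketch of how the three perspectives Kiron names (teleology, epistemology, ontology), plus ethics, could be turned from tacit assumptions into explicit prompt parameters. The prompt wording, dimension labels and enterprise context are illustrative assumptions, not material from the researchers or any vendor.

```python
# Hedged sketch: encode the philosophical dimensions named above as explicit
# instructions that seed an enterprise assistant's system prompt. The specific
# principles below are hypothetical examples, not the authors' training data.

PHILOSOPHICAL_FRAME = {
    "teleology":    "Your purpose is to build long-term customer trust, not just short-term conversion.",
    "epistemology": "State how confident you are and what evidence each recommendation rests on.",
    "ontology":     "Treat 'customer loyalty' as a relationship over time, not a single transaction score.",
    "ethics":       "Weigh fairness to all customer segments before recommending targeted offers.",
}

def build_system_prompt(frame: dict[str, str]) -> str:
    """Turn tacit philosophical assumptions into an explicit system prompt."""
    lines = [f"[{dimension}] {principle}" for dimension, principle in frame.items()]
    return "You are an enterprise assistant.\n" + "\n".join(lines)

if __name__ == "__main__":
    print(build_system_prompt(PHILOSOPHICAL_FRAME))
```

In practice the same framing could just as well be expressed through fine-tuning data or evaluation rubrics; the point of the sketch is only that the philosophical choices become visible, reviewable text rather than unstated defaults.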

IDE: The majority of AI developers — as well as business leaders — don’t think about philosophy on a daily basis. How will philosophy infiltrate AI software design going forward?

MDS: That’s not quite true. Developers may not consciously or intentionally articulate their philosophies, but, as David’s response makes clear, there’s no avoiding questions of purpose, knowledge, semantics, aesthetics and ethics when you’re using AI to set up a supply chain, a CRM, an accounting system or an employee development program. We argue that LLMs and GenAI turn these tacit philosophical assumptions into explicit AI parameters for training, fine-tuning and learning.

Essentially, we believe our philosophy-eats-AI paradigm overturns almost every significant legacy element of software development. This goes way beyond copilots and code generators to the fundamentals of what we want software to deliver.

Remember, we’re far beyond ‘just’ coding and development — we’re training models to learn and learn how to learn. What learning principles matter most? What do we want our models to ‘understand’ about customer or employee loyalty? What kinds of collaborators and partners do we want them to become for us and with us?

Barely five years ago, these questions were hypothetical and rhetorical. Today, they define the research agendas of organizations that really want to get the best impact from their AI investments. We see this sensibility emerge in the interviews we’re doing and in the way smart organizations tell us they’re training and fine-tuning their LLMs. It’s not an accident that AI investors and innovators like Stephen Wolfram, Palantir’s Alex Karp and venture capitalist Peter Thiel have formal philosophical training and interests.

So, philosophy gobbling up AI isn’t a semantic exercise or ivory-tower musing. We see it as a strategic imperative, resource and differentiator for sustainable AI success.

Philosophy has already infiltrated the hearts, minds, codebase, training sets and datasets of every large and small language model development team worldwide — now, it’s time to harness that knowledge.

Image created by Anthropic’s Claude AI.

IDE: What are the potential benefits? Who stands to gain?

MDS: Our research — based on six years of global executive surveys and interviews with scores of executives in multinational companies — offers compelling evidence that philosophy-driven training and investments directly impact AI’s economic returns. What we don’t know is how diverse and divergent these approaches may be. That fascinates us.

We know that regulators and emerging public policies already treat philosophical questions about purpose, accuracy and alignment with human values as fundamental to effectively training AI models. This doesn’t mean firms hire Chief Philosophy Officers to oversee AI capabilities and performance…yet. But decoupling philosophical training from AI training seems foolish and counterproductive. We see machine learning as more effective with philosophical training.

IDE: Discuss the perils, for example, to those concerned about personal and corporate privacy, overreach by governments and tech firms, and fear of AI in general.

DK: Those concerned about the fairness and equity of AI decisions and value-creation can consider this: LLMs can be tuned to emphasize Western moral principles that offer responses rooted in explainable utility and distributive justice. Alternatively, GenAI systems cultivated from Eastern philosophies, such as Taoism and Confucianism, would emphasize detachment and relational ethics.

Companies can choose from many available philosophical perspectives to improve their AI-driven outcomes. Leaders who want AI to advance their strategic outcomes need to become more effective critical thinkers about the philosophies that underpin their AI efforts.
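As a stylized illustration of what choosing among those perspectives might look like in configuration terms, the sketch below keeps a small registry of ethical “tunings” and seeds an assistant with one of them. The framing texts are loose paraphrases of the traditions mentioned above, and the function and framing names are hypothetical, not a published setup from either researcher.

```python
# Illustrative only: a registry of philosophical framings a team might choose
# from when configuring a generative AI assistant. Framings are rough
# paraphrases, not authoritative summaries of the traditions they name.

ETHICAL_FRAMINGS = {
    "utilitarian": (
        "Recommend the option with the greatest overall benefit, and show the "
        "cost-benefit reasoning so the trade-offs are explainable."
    ),
    "distributive_justice": (
        "Prefer options that improve outcomes for the least advantaged group, "
        "and flag any decision that widens existing gaps."
    ),
    "confucian_relational": (
        "Weigh obligations to the relationships involved (employee, team, "
        "customer) and favor responses that preserve long-term harmony."
    ),
}

def configure_assistant(framing: str, task: str) -> list[dict[str, str]]:
    """Return a chat-style message list seeded with the chosen ethical framing."""
    if framing not in ETHICAL_FRAMINGS:
        raise ValueError(f"Unknown framing: {framing!r}")
    return [
        {"role": "system", "content": ETHICAL_FRAMINGS[framing]},
        {"role": "user", "content": task},
    ]

if __name__ == "__main__":
    messages = configure_assistant(
        "distributive_justice",
        "Draft criteria for allocating the Q3 training budget.",
    )
    for message in messages:
        print(f"{message['role']}: {message['content']}")
```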

Fundamentally, philosophy represents a powerful resource and method for transforming AI into a steward of humanity, rather than a threat to it.

Our point is that philosophy can cost-effectively elevate the power of enterprise AI to attain its human-designed objectives — and overcome biases. This potential has been largely unrecognized, undervalued and underappreciated. The big question is whether leaders will decide to use available philosophical resources for this purpose.

IDE: Walk us through an abbreviated scenario where developers are using a philosophical approach to machine learning training.

MDS: Remember, LLMs use rewards to shape training outputs and outcomes. What do we want to reward and why? You could, for instance, decide to train an HR chatbot to generate outputs that nudge employees to pursue a work-life balance program that is known to be successful for employees in similar circumstances. Drawing on the libertarian paternalism ethos of Sunstein and Thaler, developers could train the chatbot to offer specific suggestions and questions that direct users toward certain behaviors, while preserving the autonomy of users to pursue other courses of action.
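A minimal sketch of that reward idea follows, assuming a scoring function applied to candidate chatbot replies during training or evaluation. The keyword heuristics, phrase lists and weights are illustrative assumptions; a production system would use a learned reward or preference model rather than string matching.

```python
# Hedged sketch: score candidate HR-chatbot replies so that training favors
# gentle nudges toward a work-life balance program while preserving the
# employee's freedom to choose otherwise (the "libertarian paternalism" ethos
# described above). All phrases and weights here are hypothetical.

NUDGE_PHRASES = ["work-life balance program", "flexible schedule pilot"]
AUTONOMY_PHRASES = ["it's your choice", "entirely optional", "other options include"]
COERCIVE_PHRASES = ["you must", "required to", "no other option"]

def libertarian_paternalism_reward(reply: str) -> float:
    """Higher scores for replies that nudge toward the program but keep autonomy."""
    text = reply.lower()
    score = 0.0
    if any(phrase in text for phrase in NUDGE_PHRASES):
        score += 1.0   # nudges toward the beneficial default
    if any(phrase in text for phrase in AUTONOMY_PHRASES):
        score += 1.0   # explicitly preserves the user's freedom to decline
    if any(phrase in text for phrase in COERCIVE_PHRASES):
        score -= 2.0   # penalize anything that removes choice
    return score

if __name__ == "__main__":
    candidates = [
        "You must enroll in the work-life balance program.",
        "Many colleagues in similar roles found the work-life balance program "
        "helpful; it's entirely optional, and other options include adjusting "
        "your current schedule.",
    ]
    for reply in candidates:
        print(f"{libertarian_paternalism_reward(reply):+.1f}  {reply[:60]}...")
```

The design choice doing the work here is simply that the philosophical commitment (nudge, but never coerce) is written down as an explicit scoring rule that training can optimize against, rather than left implicit in whatever the base model happens to do.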

IDE: What is the role of humans in this brave new world? What happens when AI-trained philosophies compete and disagree?

MDS: Philosophy’s ultimate AI impact might not be in making these intelligences more ethical or better aligned with current human values, but in transcending our current perceived limitations and inspiring new frontiers of understanding and capability. While I’m not predicting we’re going to see a Singularity 2.0 or 3.0 (apologies to Ray Kurzweil), we will likely discover and uncover astonishing and unexpected insights about ourselves and our universe. By decade’s end, we’ll be getting philosophical insights and inspirations from next-gen LLMs that will shock and inspire people. That’s what happens when philosophy eats AI.

From left, the human thinkers, Michael Schrage and David Kiron.