Happy New Year
Happy New Year! As I write this, I am traveling with my NYU Stern Tech MBA class on “Tech Innovation” in Seattle and the Bay Area. I did the same trip a year ago when I published my newsletter called “The Next Tech War,” referring to the AI war sparked by Microsoft’s investment in OpenAI, which was announced when my class visited Microsoft’s headquarters here last January.
I am touched by the warmth I feel from the NYU alums hosting us at the tech companies as they fondly recount their days at Stern. We must be doing something right. I want to extend a special thanks to Jeff Teper and Lili Cheng at Microsoft, Salim Kouidri at T-Mobile, and Alex Mical and Meredith Bunche at Amazon. Makes me proud to be a “Sternie.” I am looking forward to the visits ahead of us next week in the Bay Area. And a special thanks to Sarah Ryan for making everything hum.
My Most Recent Podcast
Kicking off the new season of Brave New World is Mohit Satyanand, fellow world hiker and mountain lover, investor, and a policy and economy analyst. He’s a rolling stone who gathers moss along the way thanks to his sticky fingers.
We talked about how India stacks up as an investment opportunity and covered a lot of ground in the process, including India’s state of digital and physical public infrastructure, law enforcement, public health, education, and its environment. Mohit digs into each of these in our free-flowing conversation, so check it out.
Mohit’s last name translates into something like “Truthful Bliss,” so it’s an interesting coincidence that I read Gandhi’s book “My Experiments With Truth” over the holidays. It prompted me to contrast how Gandhi’s concept of truth compares with truth in AI.
Mahatma Gandhi conjures up imagery of a skinny old man walking long distances in a loincloth, urging non-violence and passive resistance against British occupation. Volumes have been written about him. But hearing him describe his evolution in his own words, before the entire world started referring to him as a Mahatma, is special. The picture above was taken at the Tolstoy Farm in South Africa where Gandhi spent many years as a young man. He is seated left of center.
Gandhi describes his youth as unremarkable, but always driven by an unwavering commitment to the truth. He considers non-violence a necessary condition for arriving at the truth in human affairs, especially justice. His views on non-violence were shaped by Leo Tolstoy, with whom he communicated extensively in the early 1900s. Gandhi writes: “there is no other God than Truth…and a perfect vision of Truth can only follow a complete realization of Ahimsa (non-violence).” An eye for an eye blinds truth.
Gandhi illustrates this practice by way of an example in South Africa. On one occasion, he was badly injured by a white lynch mob in Durban who opposed his candidacy to legally represent the persecuted Indian community. When the Secretary of State for the Colonies offered to press charges against his assailants, Gandhi refused to seek retribution, saying “I am sure that, when the truth becomes known (in other words, that law must be blind to race), they will be sorry for their conduct.” His expectation was that retaliation would only escalate violence, while his decision not to pursue them would cause them to reflect and thereby arrive at the truth, namely, that justice is meant to be color blind. The strategy ultimately worked to his advantage. As he recounted later, “My refusal to prosecute the assailants produced such a profound impression that the Europeans of Durban were ashamed of their conduct.” His reputation grew.
The book is an easy read and gives the reader a deep dive into the simplicity of Gandhi’s logic, and what he means by “truth” in every context from accurate record-keeping to law. He writes about experimenting with things like booze and meat as a teen, but how his “inner voice” would always redirect him towards what was right. He went to England at the age of 18 to study law. He was deeply introverted and embarrassed by his poor English, and rarely spoke. He couldn’t afford the bus, so he started walking many miles across London every day, and discovered that it kept him healthy.
Over time, his inner voice grew stronger. It made self-deception impossible. Every time he fooled himself, his inner voice would guide him unswervingly towards the truth. He writes how the quest for truth became a striving for perfection, whose pursuit is its own reward.
The book raised a number of questions for me. Are people becoming less or more truthful? Why are people in some countries more truthful than in others? Curiously, there’s also some evidence that countries with higher mistrust among individuals show higher trust in their governments. Why is that? How important is the pursuit of truth to societies? Should truth figure prominently in the mission statements of universities and primary education? In my conversation with moral philosopher Peter Singer, he mentioned that some states have incorporated ethics into school curricula, for which Gandhi’s book, My Experiments With Truth, would be valuable reading.
Truth in Artificial Intelligence
Should we expect AI to be truthful? It’s a really important question these days as AI becomes part of the fabric of our lives.
People seem surprised that GPT hallucinates. It can make stuff up. People expect it to be truthful. As Geoff Hinton describes it, it’s like an alien species has arrived on Earth and speaks such good English that we’re having a hard time taking it all in. How can it lie so fluently?
Interestingly, the concept of truth has always been central to Artificial Intelligence. Truth and logic featured heavily in AI when I got into the field many years ago. Human experts provided the knowledge which served as the “ground truth,” and the AI orchestrated its use through the rules of logic or other attention mechanisms to stay on firm footing.
But logic alone is insufficient to model intelligence, as my podcast guest Sam Bowman recounted in an earlier episode of Brave New World. Intelligence is much too complex and contextual to be specified top-down. Indeed, AI has gone through several paradigm shifts as we’ve developed machines that can see, hear, and understand language based on models created purely from data. But in the current era of machine learning, where machines learn largely through self-supervision using data with varied degrees of veridicality, it is difficult to tell what the machines have learned and how truthful they will be.
For example, what’s the ground truth for current AI machines such as LLMs? While they’ve learned lots of things from their vast training data, they’re not designed for truth. Rather, they are designed to “sound right” in every context by predicting the next word in ways that are linguistically coherent and tend to make sense, but are not necessarily true.
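A toy sketch makes the point concrete. The snippet below (my own illustration, not how any real LLM is built) trains a tiny bigram model that only counts which word tends to follow which. It then generates the statistically most likely continuation. Because “cheese” follows “made of” more often than “rock” in its little training corpus, it fluently asserts a falsehood: frequency, not truth, drives the prediction.

```python
from collections import Counter, defaultdict

# Toy corpus: the model learns word co-occurrence, not facts.
# "cheese" appears twice, "rock" once.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

# Count bigrams: for each word, how often each next word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation, i.e. what 'sounds right'."""
    return bigrams[word].most_common(1)[0][0]

# Generate by repeatedly predicting the most likely next word.
words = ["the"]
while words[-1] != ".":
    words.append(predict_next(words[-1]))

print(" ".join(words))  # prints: the moon is made of cheese .
```

The output is perfectly coherent English, and confidently wrong, because the model’s only objective is to match the statistics of its training text. Real LLMs are vastly more sophisticated, but the underlying objective is the same kind of next-word likelihood, not truth.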
The ability to discern truth from falsehood will require a degree of consciousness which is lacking in AI machines at the moment. Years ago, machines with consciousness would have been unimaginable. Today, one wonders whether their creation is only a matter of time.
(This article was first published on Substack.)