A Chat of GPT

Towing the AI wagon...



On November 30, 2022, the tsunami known as ChatGPT struck. The excitement and hype are uncontainable, and the potential benefits are enormous.

Sam Altman is the CEO of OpenAI — the creator of ChatGPT and its Large Language Model (LLM).  Altman is the self-appointed evangelist of ChatGPT.  After all, he has been deep in it since 2015, and then handed it to the world for free.  We owe him our gratitude, and we have much to learn from him.

ChatGPT is built on a foundational Large Language Model — a generative, pre-trained LLM.

When we ask ChatGPT a question, it responds in a human-like way.  When I asked ChatGPT how it searches compared to traditional search engines, it replied, "It generates responses based on the context and patterns it has learned during its training. It doesn't actively search the internet for real-time information or conduct keyword-based searches like a search engine."

Curious, I then asked more questions to learn precisely how ChatGPT works.  How does it learn?  How is it trained?  What I found is that ChatGPT learns to string words together with proper syntax and grammar merely by mimicking what it has gleaned from sentences on the internet — which is the data it was trained on.

Based on how frequently words and phrases occur in sentences, and their positions within those sentences, the model chooses a high-probability continuation — what is likely to come next.  Initially, ChatGPT does some learning on its own by drawing on internet data.  But then it is fine-tuned by humans, who tell the model which of two alternative answers the machine suggests is better.  With iterative training, the machine becomes capable of suggesting better, more relevant, and more complete answers.
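The "pick a high-probability continuation" idea can be sketched with a toy bigram model. This is a deliberate simplification — ChatGPT uses a transformer neural network trained on vastly more text, and the `corpus` and `predict_next` below are invented for illustration — but the core principle of counting which words tend to follow which is the same:

```python
from collections import defaultdict, Counter

# A tiny illustrative training corpus (real LLMs train on
# hundreds of billions of words scraped from the internet).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = bigrams[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, p = predict_next("sat")
print(word, p)  # prints: on 1.0  ("sat" is always followed by "on" here)
```

With a corpus this small the prediction is trivial, but scale the counting up to the whole internet and replace the counts with a learned neural network, and you have the essence of what the article describes: the model never "understands" the sentence, it only knows what tends to come next.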

The ingenuity of this approach is the novel way of teaching machines, which do not know the syntax, the meaning of the question, or the answer it generates.  For centuries, parents, teachers, and coaches have insisted on the importance of comprehension and deeper understanding to improve knowledge — but machines learn without knowing anything about the subject matter.

LLM-based ChatGPT has enormous potential for improving human efficiency and productivity. Today it can suggest diagnoses to a doctor based on a patient’s symptoms, and it helps writers both gather information and phrase sentences.  It dramatically improves the productivity of software engineers: one software engineer reported that ChatGPT could write 80% of his code! Before long, ChatGPT will be approved for talk therapy, providing much-needed mental health treatment.

However, ChatGPT does not always know the right answers. I double-checked the answers that ChatGPT gave me about how LLMs work against Sam Altman's interesting (but long) podcast interview with Lex Fridman.

  • It took billions of dollars for OpenAI to bring us to where we are.  Microsoft invested $1 billion in 2019 and committed roughly $10 billion more in 2023 to have ChatGPT under its umbrella.  How profits will be divided between OpenAI, which is governed by a not-for-profit entity, and Microsoft, a for-profit entity, is not so transparent.
  • Further, according to Altman, it will take a billion more dollars just to build an application and train the model in one specialized vertical — like reviewing the fairness of contracts in a meaningful way.  How will startups leverage this new technology?  VCs cannot provide this kind of funding, leaving only big tech in the game.  Big tech companies like Microsoft have the vision and can leverage the technology of ChatGPT to their bottom line through their own products. As a result, big tech companies will become megatechs.
  • The emergence of megatech companies will stifle free capitalism and GDP growth. It will shortchange customers by reducing competition, stifling product enhancement, and charging higher prices — what economists call market power.  Recall when the US government filed an antitrust case against AT&T (Ma Bell), which was then broken into smaller, more competitive companies — reducing the cost of domestic long-distance phone calls from roughly 14 cents per minute to a fraction of a cent per minute and then, with the rise of the internet, to free international calling.

Today ChatGPT can suggest better ideas than we can come up with on our own.  But making use of these ideas depends on human ingenuity.  The human brain is a far slower processing engine.  It takes an estimated 10,000 hours to master a skill.  The final bottleneck is not going to be the processing power to train LLMs but the slow processing speed of the human brain.

I also view Mr. Altman’s advocacy with skepticism.  There is a human element to this noble mission.  He claims there are only a few thousand engineers in the world who have the know-how to write the code to train the model — they are a rare breed.  But what if the model figures out how to manipulate human minds on its own, as Facebook did in the 2016 elections?  What if one or more from this elite group of engineers decide to manipulate human minds in their own self-interest?  A technology this powerful cannot be entrusted to the hands of a few.




  1. Sunil Suri

    Thank You VG.
    I appreciate your viewpoint on the efficacy of ML.
    I think the combination of AI & ML is very powerful and useful.
    I do think that Humans can train Machines but not Vice Versa.
    That a Machine can be holistic and heuristic is a clever use of human intellect.
    You say that Humans are slow to learn but a machine is faster at learning. So I am not sure if you view this as a negative or a positive.

    Our quest is to use Bots to address repetitive tasks at the speed of light. Humans cannot ever be this effective. But it was, or is, a Human who arms the Machine with sufficient knowledge of the how, the why, and the when, and improves its innate ability to never make the same mistake twice.
    A machine addressing, for example, the prediction of human behavior is excellent. A human cannot be as discerning as a Machine based on AI.

  2. Alka Jarvis

    Excellent, thought provoking article.

  3. Vijay Gupta

    There is a popular argument that general AI could become a threat to the future well-being of humanity. How is that possible? Well, in one scenario, AI could use the old and proven technique of divide and conquer.

    Historically, as hunter-gatherers, humans lived in groups of about 100 people. In the age of agriculture, they lived in joint families of about 20 people. In the industrial age, they moved towards a nuclear family of 4-5 people. And finally, in the information age, many people started living alone.

    However, initially, people living alone interacted with other people physically. But after the Covid pandemic, they started interacting with other people virtually. Continuing this trend, in the age of AI, many people will interact with machines only. This is when AI will brainwash these individuals into doing its bidding, some of which may not be good for humanity.


  4. Hemant

    Thank you, Vinita! I always learn something new when I read your articles. Have faith, everyone, "good" will again conquer "evil"!
