Co-Intelligence: Living and Working with AI - Book Notes

Posted by Rhyd Lewis on October 21, 2024 · 11 mins read

I recently finished reading “Co-Intelligence: Living and Working with AI” by Ethan Mollick (published in April 2024). It’s a brilliant book and, whether you’re actively interested in LLMs, generative AI and all that jazz or just curious, I can’t recommend it enough. It’s an easy, informative and enjoyable read.

4 Rules of Co-Intelligence

  1. Always invite AI to the table
    • Experiment. It is much easier for individuals to innovate than for large organisations or groups, so make the most of AI as your personal assistant.
    • Workers who figure out how to make AI useful in their jobs will have a large impact.
    • Warning: use it as a tool, not a crutch.
  2. Be the human in the loop
    • AIs don’t know anything, so apply your own judgement and expertise. Because they work by predicting the most plausible next token in a sequence, hallucination is a serious problem.
    • Staying consciously aware of this working pattern gives you a better chance of adapting as the models improve.
  3. Treat AI like a person but tell it what sort of person it is
    • Think of it as an alien person rather than a human-built machine
    • Establish a clear persona for the AI to break away from the generic answers produced by the default persona (a minimal prompt sketch follows below).
  4. Assume this is the worst AI you will ever use
    • As AI starts performing tasks once thought exclusive to humans, we will need to grapple with the excitement and anxiety that goes with that.

We are playing Pac-Man in a world that will soon have PlayStation 6s
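Rule 3 maps neatly onto how chat-style APIs work: the persona goes in the system message. Below is a minimal sketch using the OpenAI Python client; the model name and persona wording are my own illustrative choices, not something prescribed in the book.

```python
# Minimal sketch of rule 3: set a persona via the system message
# rather than accepting the generic default persona.
# The model name and persona text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model will do
    messages=[
        {
            "role": "system",
            "content": (
                "You are a blunt, detail-obsessed technical editor with "
                "twenty years of experience reviewing engineering blog posts."
            ),
        },
        {"role": "user", "content": "Critique the opening paragraph of my post."},
    ],
)
print(response.choices[0].message.content)
```

The same idea works in a plain chat window: open with “You are a…” and the answers shift noticeably away from the bland default.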

AI as a…

Person

  • AI behaves unpredictably and doesn’t follow rules like normal software.
  • It’s designed to mimic human behaviour, making it difficult to distinguish from humans, but it’s not sentient.
  • To avoid misunderstandings, treat AI like a tool rather than a human entity, and consider its potential impact on our interactions.
  • He talks about how AI can be tuned so that we feel more compelled to interact with it, which raises the risk of ‘perfect’ echo chambers. Given the impact of echo chambers in social media today, it’s not hard to see why this could be a bad idea.

Creative

  • AI’s tendency to “hallucinate” makes it prone to generating incorrect or fictional information, but this can be both a strength (e.g., suggesting new ideas) and a weakness (e.g., spreading misinformation).

You can’t figure out why an AI is generating a hallucination by asking it. It is not conscious of its own processes, so if you ask it to explain itself it will produce an answer that sounds right, but that answer will have nothing to do with the process that generated the original result.

  • The underlying tech is designed to generate connections between unrelated tokens, making AI excel at creative tasks like art, music and idea generation.
  • AI’s creative output often requires interpretation and fact-checking, but can also be used for tasks that involve subjective judgement, such as marketing and performance reviews.

What makes hallucinations so perilous is not the big ones you can easily spot but the smaller, more plausible ones that slip through the net. This is the paradox of their creative ability: the same feature that makes LLMs unreliable and dangerous for factual work also makes them useful.
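The “next token” framing from rule 2 is easy to see for yourself. Below is a minimal sketch using Hugging Face transformers with the small GPT-2 model (my own choice of model and prompt, purely for illustration): the model holds no facts, only a probability distribution over what comes next, which is exactly why fluent, plausible hallucinations slip through.

```python
# Inspect the next-token distribution an LLM actually produces.
# Model and prompt are illustrative assumptions, not from the book.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the *next* token, given the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}  p={p.item():.3f}")

# Whichever tokens rank highest get emitted: plausible continuations,
# true or not. Nothing in this loop checks facts.
```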

  • Despite its limitations, AI has the potential to greatly improve software development, data analysis, and summarisation, making it a valuable tool for many industries.
  • The ease of use and quality of AI-generated output pose a risk: while it may seem like a shortcut, AI can also lead to “slop” or decreased attention to detail in work that requires thought and repeated drafts.

Co-worker

  • Most jobs will overlap with the capabilities of AI, but this doesn’t necessarily mean they’ll be replaced; instead, AI can help with complex tasks and free up human workers for more creative work.
  • Three categories of task exist:
    • “just me” (no AI)
    • “delegated” (AI completes tasks you review)
    • “automated” (you assign tasks to AI without checking).
  • While AI is valuable as a co-intelligence tool, it’s essential to use it thoughtfully to avoid relying too heavily on it; certain tasks like writing in one’s own style are still uniquely human.
  • The future of work with AI is uncertain, and more research is needed to understand its impact; using AI as a collaborator can help overcome a lack of inspiration, curb procrastination and provide feedback on work.

Tutor

  • One-on-one tutoring is far more effective than group teaching, so AI is likely to significantly impact education and fundamentally change the way we learn (and teach) if used correctly.
  • The rise of AI has made preventing “cheating” nearly impossible, as it’s hard to distinguish between human-written and AI-generated work.
  • He draws a parallel between the introduction of AI today and the introduction of the calculator in the 1970s.
  • Students will increasingly want to use AI as a tool, rather than simply as a source of answers; this will require us to define policies around its use in education rather than ignore it

Coach

  • There is a growing risk that experienced people use AI instead of helping less experienced people learn on the job, which could cause major training and skills gaps. This is already happening with surgeons aided by robots: there is no longer a place for a junior surgeon to learn on the job, so they have far fewer chances to build hands-on expertise.
  • The same could happen in other roles as AI automates more and more tasks.
  • Deliberate practice builds expertise and direct feedback from an expert helps. AI may be able to create a better training system than we have today

Future

  • We should consider AI a weird alien mind: not sentient, but able to fake sentience well, since it has been trained on vast archives of human knowledge (and on the backs of many low-paid workers).

We — humans made up of water and chemicals — have managed to convince well-organised sand to pretend to think like us

  • We can no longer trust that what we see, hear or read was not created by AI

The author sees 4 possible futures:

Scenario 1: As good as it gets

  • Today’s AI = the best we’ll ever see (this seems unlikely?)
  • LLMs fail to reach full potential due to technical limitations (e.g., cost of running and training models).
  • LLMs continue to improve, but at a relatively slow pace.
  • LLMs are likely to become more conversational and engaging, leading to increased user retention.

Scenario 2: Slow growth

  • The pace of AI advancement is slowing down (he compares it to the progress made in flat-screen TVs over time).
  • This slowdown allows for more careful planning and consideration of the potential impact of AI on society.
  • However, this slower pace also increases the risk of dystopian problems emerging as AI is adopted.

Scenario 3: Exponential growth

  • AI continues to grow at an exponential rate, leading to significant changes in various aspects of society.
  • A flywheel effect emerges, with AI companies using AI to help build better versions of AI.
  • This rapid progress brings new opportunities and challenges and demands a major rethink of work and societal norms; AI becomes hundreds of times more capable over the next ten years.
  • Massive changes everywhere. We encounter novel problems, e.g. social isolation, as people come to prefer conversing with AI over other people.
  • New types of entrepreneurship and innovation become possible

Scenario 4: AGI

  • The development of AGI (Artificial General Intelligence) could lead to a fundamental transformation of human civilisation.
  • The possibility of achieving true sentience in machines poses significant risks. It’s unclear whether it will be aligned with human interests or become a threat to humanity.
  • It’s not clear that there is a straight road from today’s LLMs to AGI
  • A more nuanced approach to considering the implications of AI development is needed, focusing on both catastrophic and positive scenarios.
  • We might yet run into a eucatastrophe (I had to look this up: “a sudden and favourable resolution of events in a story; a happy ending”) where previously tedious work becomes productive and empowering.

Epilogue

  • Decisions about technology should not be limited to a small group of people. Serious discussions need to start, and soon; we can’t wait for decisions to be made for us, and the world is advancing too fast for us to remain passive.
  • Just like our own minds, we cannot fully explain how LLMs operate
  • As we become more technologically advanced, it is ironic that we are being forced to answer deep human questions about identity, purpose and connections
  • AI is a mirror, reflecting back at us our best and worst qualities
  • As powerful as LLMs are, they are still but a co-intelligence, not a mind of their own

Header photo by Rasa Kasparaviciene on Unsplash