Saturday, April 11, 2026

Follow up: Economies of AI

Yesterday, I gave a talk about my earlier post Economies of AI with supplemental material to illustrate the point (slides in Chinese). One addendum in my talk was a reference to George Polya's book How to Solve It (1945), which I used to illustrate the comparative strengths of AI and humans in problem solving.

Polya broke problem solving down into four phases:

  • Definition: what are the unknowns, how do we collect data, what constraints apply, can we derive the unknown from the constraints, and are there any logical gaps or contradictions?
  • Planning: are there known solutions or similar solutions, are there ways to break down the problem (by divide and conquer), and can we solve a specialized problem and then generalize?
  • Execution: step by step, verify the correctness of each step.
  • Assessment: are the steps reasonable, are the results reasonable, other ways to solve the problem, applications of the solution to similar problems?

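The phases above can be walked through on a deliberately simple toy problem. The sketch below (my own illustration, not from the talk) finds the maximum of a list by divide and conquer, with comments mapping each step to Polya's phase:

```python
# Toy walk-through of Polya's four phases on a trivial problem:
# find the largest element of a list. Hypothetical illustration only.

def max_of(xs):
    # Definition: the unknown is the largest element; the condition
    # is that xs must be non-empty, otherwise the problem is ill-posed.
    assert xs, "ill-posed: empty list has no maximum"
    # Planning: solve the specialized one-element case directly,
    # otherwise divide and conquer on the two halves.
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    left, right = max_of(xs[:mid]), max_of(xs[mid:])
    # Execution: combine sub-results in one verifiable step.
    result = left if left >= right else right
    # Assessment: is the result reasonable? It must be an element
    # of xs and no smaller than any other element.
    assert result in xs and all(result >= x for x in xs)
    return result

print(max_of([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```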
I argued that these principles of problem solving apply universally to any problem, but humans are uniquely qualified for problem definition and result assessment. The motivation to solve a problem comes from humans; AI has no motivation or intent of its own. And as the beneficiary of the solution, only humans can judge whether the outcome works as intended, since AI has no means to gather real-world feedback. AI, on the other hand, either excels or will excel at planning and execution.

I also made a quadrant to illustrate the division of labor based on value proposition and risk.

              Low Risk      High Risk
  High Value  πŸ§‘ πŸ€– 🏭?     πŸ§‘ πŸ€– 🏭?
  Low Value   πŸ§‘ πŸ€– 🏭      πŸ§‘ πŸ€– 🏭

In summary, humans should focus on high-value work, AI should avoid high-risk work, and automation (deterministic algorithms) is typically relegated to low-cost work (even though low cost does not necessarily mean low value).
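The summary rule can be sketched as a tiny routing function. This is a minimal sketch of my own, assuming a coarse high/low labeling of value and risk; the function name and categories are illustrative, not part of the talk:

```python
# Illustrative sketch: route a task to a worker type based on the
# value/risk quadrant. Labels and thresholds are assumptions.

def assign_worker(value: str, risk: str) -> str:
    """Route a task to a human, AI, or deterministic automation."""
    if value == "high":
        return "human"         # humans focus on high-value work
    if risk == "high":
        return "human"         # AI should avoid high-risk work
    return "automation or AI"  # low value, low risk: cheapest option

print(assign_worker("high", "low"))  # human
print(assign_worker("low", "low"))   # automation or AI
```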

Early in my talk, I also drew an analogy to economies of scale using mobile phones as an example. At first, only a select few could afford car phones; then more elites could afford cellphones, though they were still bulky. Mobile phones became prevalent and affordable with the indestructible Nokia 3310, even though functionality was still limited, and now everyone has a smartphone. I argued that AI right now is at the Nokia 3310 stage.

Audience questions

Q: Although humans should focus on high value work, what would happen to those who could not find high value work?

Economists know the Pareto distribution, which states that 20% of the causes produce 80% of the outcomes. You can see this in teams or group projects, where a few people deliver most of the value. You also see it in nature, where roughly 40% of worker ants are idle, yet these idle ants serve as reserve capacity. IT uses redundancy as a backup strategy by following the 3-2-1 rule: 3 copies of the data, on 2 different media types, with 1 stored offsite. These examples show that reserve capacity, or redundancy, is needed in any resilient system.
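The 80/20 claim is easy to check with a toy simulation. The sketch below (my own illustration; the shape parameter 1.16 is the value classically associated with an 80/20 split) draws contributions from a heavy-tailed Pareto distribution and measures how much of the total the top 20% account for:

```python
import random

# Toy Pareto simulation. alpha ~= 1.16 is the shape parameter
# classically associated with the 80/20 rule.
random.seed(42)  # fixed seed so the result is reproducible
contributions = sorted(
    (random.paretovariate(1.16) for _ in range(10_000)), reverse=True
)

# Share of the total delivered by the top 20% of contributors.
top_20 = contributions[: len(contributions) // 5]
share = sum(top_20) / sum(contributions)
print(f"Top 20% contribute {share:.0%} of the total")
```

The exact share varies with the seed, but it consistently lands far above the 20% that a uniform distribution of effort would predict.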

Without the reserve capacity, the system will collapse under stress such as war or disasters.

Q: With AI or automation taking away low value work, how would a person develop the skills to perform high value work?

Instead of focusing on planning and execution, education should focus on the definition and assessment aspects of problem solving, the first and final phases according to Polya.

Q: In your economies of scale analogy, AI is currently like the Nokia phone. What would it take to become a smartphone?

Currently, users spend a lot of effort on prompt engineering (Polya's problem-definition phase) because the prompt has to be precise and unambiguous to avoid AI slop (see I Tried to Kill Vibe Coding). AI could evolve to fill some of these definition gaps by making heuristic guesses about what the prompt means when a problem is stated, and by assessing whether any previously solved problems are relevant to the problem at hand. The assessment of the outcome's efficacy, however, can still only be done by the user.
