This is a cost-benefit analysis of using AI to solve problems, comparing how it fares with classical methods, e.g. deterministic algorithms or manual labor, and considering the cost of creating automation.
A fair warning: current LLMs are unable to summarize this article correctly because of my unique perspective (example), as this article is not about the H-word at all. You should try to read it yourself. If you are impatient, at least read the first sentence of each paragraph and the conclusion.
AI vs. Deterministic Algorithms
An example of a deterministic algorithm is one that computes an arithmetic expression like "1+2+4". There are well-known and efficient ways to compute it.
- First the string is tokenized: "1+2+4" → ['1', '+', '2', '+', '4']. This is called lexing.
- Then the tokens are organized into an abstract syntax tree: ['1', '+', '2', '+', '4'] → Plus(1, Plus(2, 4)). This is called parsing.
- Then the abstract syntax tree can be traversed recursively and the value is computed: Plus(1, Plus(2, 4)) → Plus(1, 6) → 7. This is called evaluation.
- Under the hood, a machine would compute the addition using a circuit of logic gates called an Adder.
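The three steps above can be sketched in a few lines of Python. This is a minimal illustration, assuming only single-digit numbers and the `+` operator; the names `lex`, `parse`, and `evaluate` are just labels for the steps described above:

```python
def lex(s):
    """Lexing: split the string into tokens."""
    return list(s)  # "1+2+4" -> ['1', '+', '2', '+', '4']

def parse(tokens):
    """Parsing: build a right-nested syntax tree of Plus nodes."""
    head = int(tokens[0])
    if len(tokens) == 1:
        return head
    assert tokens[1] == '+'
    return ('Plus', head, parse(tokens[2:]))  # Plus(1, Plus(2, 4))

def evaluate(node):
    """Evaluation: recursively reduce the tree to a number."""
    if isinstance(node, int):
        return node
    _, left, right = node
    return evaluate(left) + evaluate(right)

print(evaluate(parse(lex("1+2+4"))))  # -> 7
```

Each step is a small, predictable transformation, which is exactly why this kind of computation is so cheap for a machine.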
For AI to do the same, the tokenizing is similarly done by a deterministic algorithm, but the rest is done through many large matrix multiplications. These matrices are much larger than the length of the input token sequence, and matrix multiplication takes at least \(\Omega(n^2)\) time (the naive algorithm takes \(O(n^3)\)). The logic-gate circuit for a Binary Multiplier is also much more complex than an Adder. And large matrices take up more memory and more communication bandwidth to move the data around.
This is why a machine can make billions if not trillions of calculations per second, yet it takes AI a few seconds to complete a single prompt. Not to mention that the power consumed by AI is several orders of magnitude greater than that of a deterministic algorithm.
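A back-of-the-envelope calculation makes the gap vivid. All the sizes below are illustrative assumptions, not measurements of any particular model; the point is only the order of magnitude:

```python
# Evaluating "1+2+4" deterministically takes two additions.
deterministic_ops = 2

# One matrix multiply of a (seq_len x d) activation by a (d x d) weight
# takes roughly 2 * seq_len * d * d floating-point operations, and a
# large model performs many such multiplies per forward pass.
seq_len, d = 5, 4096      # 5 tokens; hidden size 4096 (assumed)
matmuls_per_pass = 100    # number of matmuls in one pass (assumed)
llm_ops = 2 * seq_len * d * d * matmuls_per_pass

print(f"{llm_ops / deterministic_ops:.0e}x more arithmetic")  # -> 8e+09x more arithmetic
```

Even with generous rounding, the AI route does billions of times more arithmetic to answer the same question.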
This is why, for problems for which we already have a deterministic algorithm, it does not make economic sense to use AI. Furthermore, it would be in AI's best interest to offload any such prompts to a deterministic algorithm. AGI may be of academic interest, but it is not economically viable for doing mundane tasks, just as we stopped hiring humans to crunch numbers once computers became commonplace.
AI vs. Manual Labor for Doing the Work
To achieve economic parity, AI would have to be relegated to the odd jobs—the long tail for which no deterministic algorithm exists. For these odd jobs, a person should try to do the job first before trying AI, for two reasons: having done the job themselves, they have a better understanding of how to write the prompt; and they will be in a better position to evaluate whether AI is doing the job correctly. Skipping this step is a common reason for getting AI slop. It is not necessarily the fault of the model when the prompt itself is sloppy.
If an odd job is truly one-off, it may make sense to do it only manually, because the cost of learning by doing is comparable to the cost of understanding the job well enough to write the correct prompt. When doing things manually, we gain insight into potential problems, and can then adjust the assumptions, requirements, or expectations to avoid them. AI is unlikely to challenge the assumptions made in the prompt unless specifically asked. We wouldn't know what to ask for unless we are already aware of the problems, and we wouldn't be aware of the problems unless we had tried to do the job ourselves. So just do it first. When the job comes up again, then offload it to AI. This weird trick of DIY-ism will, perhaps counter-intuitively, save you tons of time writing prompts.
AI vs. Manual Labor for the Creation of Automation
When these odd jobs become frequent, it makes sense to invest the time to automate them by designing a deterministic algorithm and writing programs. Traditionally, a human would write the programs for these algorithms. AI could presumably write them now, but I argue that the difference in economic impact between the two is minimal. The reason is that whatever the cost of developing the software, it is amortized over the many jobs the software ends up automating. Even if the one-time development cost is expensive, it becomes negligible when spread over many jobs. AI may be 10x more productive than humans at writing programs, but 10% of negligible is still negligible.
What is not negligible is the cost of poorly designed automation, which has a multiplicative effect on the defects of the outcome. The defects can be incorrectness of the output, or inefficiency in the algorithm itself, consuming too many resources or taking too long. If the algorithm is poorly designed, it will screw up over many jobs, and the expense of cleaning up the mess is the polar opposite of negligible: it will be astronomical. It doesn't matter whether the algorithm is designed by a human or by AI.
When it comes to the creation of automation, we should use whatever tools are at our disposal to design an algorithm that reliably achieves the correct outcome and does so efficiently. Even if AI is not able to vibe code a project from start to finish, it can still be a valuable tool for humans to learn about the nature of the problem through prototyping.
Divide and Conquer
So far, we have treated the problem as a monolith. In reality, a problem can be broken down into many subproblems. It is like computing "1+2+4" one addition at a time, either:
- Leftist: (1+2)+4 = 3+4 = 7
- Rightist: 1+(2+4) = 1+6 = 7
And there is more than one way to break a problem into subproblems. The ability to decompose problems also gives rise to efficient algorithms known as divide-and-conquer algorithms, and in many cases it can be proven that this is the optimal way to solve a given class of problems.
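The two groupings above correspond to left and right folds, and splitting the list in half gives the divide-and-conquer version. A minimal sketch (the function names are mine, chosen to mirror the terms above):

```python
from functools import reduce

nums = [1, 2, 4]

# Leftist: (1+2)+4 -- a left fold over the list.
left = reduce(lambda acc, x: acc + x, nums)             # -> 7

# Rightist: 1+(2+4) -- a right fold, folded from the tail.
right = reduce(lambda acc, x: x + acc, reversed(nums))  # -> 7

# Divide and conquer: split in half, solve each half, combine.
def dc_sum(xs):
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return dc_sum(xs[:mid]) + dc_sum(xs[mid:])

print(left, right, dc_sum(nums))  # 7 7 7
```

All three decompositions give the same answer; they differ only in how the work is grouped, which is exactly what makes some groupings cheaper to execute than others.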
When discussing AI's economic proposition, we should remember that many bespoke problems can be reduced to subproblems that are recurrent and can be solved at a greater economy of scale than if we considered each problem in isolation.
For example, carmakers design common parts, e.g. the engine and chassis, that can be reused across multiple models of sedans and SUVs. These engines and chassis are in turn built out of common parts like standardized screws, nuts, and bolts. Greater economy of scale is achieved by using common off-the-shelf parts, even if the end product is bespoke.
In the same way, we can mix AI, deterministic algorithms, and even manual labor in different configurations to achieve economy of scale.
Value Proposition
Another issue we have neglected is the value proposition of the outcome of the work. In the pre-computer age, human calculators were used for extremely high-value work even though they were slow and error-prone: from artillery firing tables that increased the probability of winning a battle, to the scientific calculations in the race to create the atomic weapons that ended World War II. They were also employed for the backbone of the economy itself, such as finance and accounting.
Once computation became cheap, it was also used for entertainment like video games or watching cat videos.
Similarly, the method we use to solve a problem—manual, AI, or automation—says nothing about the value of the work that employs it. When it comes to high-value work where the stakes of failure are high, AI will still face fierce competition from automation and human ingenuity, in part because of AI's high error rate. On the other hand, when AI is used to generate videos for entertainment, who cares if the video shows someone with seven fingers, or if the text is malformed, provided the entertainment value is good? There are no stakes in these failures.
When company management makes the decision to replace work with AI, it is a signal that they consider the value proposition of the work to be low. They could be proven wrong by the market or the competition. Indeed, competition is a remedy for Enshittification, and we need Anti-Trust enforcement to ensure competition. I'm not sure if labor protection helps, since it enables complacency, not ingenuity.
Conclusion
We reach the conclusion that economic viability is, unsurprisingly, dictated by economy of scale.
- High volume work should be done by a deterministic algorithm, not AI.
- Low volume work could be done by AI, but humans should do it first so they can understand the problem better, both for writing better prompts and for evaluating the efficacy of the output.
- One-off work should be done by humans first to understand the problem.
When deciding which problems are high volume, low volume, or one-off, we should use a divide and conquer approach and break bespoke problems down into reusable and recurring subproblems, so we can achieve greater economy of scale. Again, this should not be a surprise to economists. If anything, computer science just provides the vocabulary to explain why the economy of scale is achievable.
We also concluded that AI slop is enabled by enshittification, and that the remedy is more competition through Anti-Trust enforcement; this, too, is unsurprising to economists.
The more sober-minded person will come to realize that AI is just one more way to get things done, and it is still subject to the same market forces as everything else. Commodified work will eventually be replaced by deterministic algorithms, not AI. High-value work will still face competition from human ingenuity, unless humans choose to be complacent or our values somehow become corrupt.
That last point, that our values may somehow become corrupt, is my greatest fear.