Of Kind Chess and Wicked Programming: How AI Influences Our Creativity

Magnus Carlsen announced on a podcast that he would no longer defend the World Chess Championship title he’d won in 2022, having remained undefeated in the event since 2013. Many critics speculated that he had lost his passion or was too afraid of losing to the next generation of players. About two years later, in March 2024, he announced the inaugural Freestyle Chess Grand Slam Tour: a series of five major Chess960 tournaments held across different continents throughout 2025.
Chess960, also known as Freestyle Chess or Fischer Random, radically expands the variety of starting positions by randomizing the placement of the back-rank pieces while leaving the pawns on their usual squares. Two constraints apply: the bishops must start on opposite-colored squares, and the king must start between the rooks. As the name suggests, this yields 960 possible starting positions. The Freestyle Chess Grand Slam Tour, though, bans the traditional chess starting position, as well as its mirror with the king and queen swapped.
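To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not any official generator) that builds a legal Chess960 back rank. The count of 960 falls out of the constraints: 4 × 4 bishop placements, 6 remaining squares for the queen, C(5, 2) = 10 knight placements, and exactly one legal rook-king-rook ordering of the last three squares, giving 16 × 6 × 10 × 1 = 960.

```python
import random

def chess960_back_rank() -> str:
    """Generate one of the 960 legal Chess960 starting back ranks."""
    rank = [None] * 8
    # Bishops on opposite-colored squares: 4 x 4 = 16 choices
    rank[random.choice(range(0, 8, 2))] = "B"
    rank[random.choice(range(1, 8, 2))] = "B"
    # Queen on any of the 6 remaining squares
    rank[random.choice([i for i, p in enumerate(rank) if p is None])] = "Q"
    # Knights on 2 of the 5 remaining squares: C(5, 2) = 10 choices
    for i in random.sample([i for i, p in enumerate(rank) if p is None], 2):
        rank[i] = "N"
    # Rooks and king fill the last three squares; the king must sit
    # between the rooks, so only one ordering is legal
    left, mid, right = [i for i, p in enumerate(rank) if p is None]
    rank[left], rank[mid], rank[right] = "R", "K", "R"
    return "".join(rank)

print(chess960_back_rank())  # e.g. "RKNBBQNR"; "RNBQKBNR" is the classical setup
```

With the classical setup and its king-queen mirror banned, the tour effectively draws from the remaining 958 positions.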
Clearly, the motivation behind the Freestyle Chess Grand Slam Tour is to make chess more interesting and entertaining in the hope of drawing a larger audience. This is reflected in the tour’s bigger prize pool compared with traditional chess tournaments and in its innovative, player-focused features, such as the “confession booth.” Players are also equipped with heart rate monitors to provide “a novel layer of drama,” as Carlsen puts it.
I’m no chess fan, but this seems like a welcome revitalization of the ancient game, sparking enthusiasm and debate among the sport’s fans and grandmasters alike (though the first Chess960 tournament dates back to 1996). It is especially pertinent now, given how AI has crept into the sport over the last three decades, from the brute-force approach of IBM’s Deep Blue in 1997 to the artificial neural networks and reinforcement learning pioneered by Google DeepMind’s AlphaZero in 2017. It is no surprise, then, that AI increasingly influences how the game is played. Magnus Carlsen wrote in The Economist’s By Invitation section:
Why freestyle? Following my fifth consecutive victory in the World Chess Championship in 2022, I announced that I would no longer defend the title. Many speculated that I was exhausted, or that I was scared of the next generation of players. On the contrary, my passion for chess remains as strong as ever, and I am as ambitious as I’ve always been. What changed was my perspective on the format of the classical world championship itself.
The challenge wasn’t the games, which often stretch for hours. I enjoy the length of time allowed under the rules. My title defence in 2018 against Fabiano Caruana, for instance, pushed us, over 12 drawn games, to our mental and physical limits, before I emerged victorious in the tiebreak. The issue lay elsewhere, in the months of grinding preparation leading up to the event. Modern World Chess Championships demand endless memorisation of computer-generated opening lines, reducing the sport’s artistry to rote learning. As someone who treasures the creativity of chess, I wanted to focus more on this aspect of the game. Also, life beyond chess deserved my attention too.
Saving Creativity
In a way, I share Carlsen’s sentiment. Chess960 was originally introduced to reduce the emphasis on opening preparation and encourage creativity. By moving to freestyle, decision-making in chess may be freed from the need to internalize patterns through years of grinding for a competitive edge. Going freestyle is how the grandmasters strike back against AI, with renewed hope for creativity in the sport.
Should the same thought be entertained about programming? Programming, after all, is an art. It is a skill honed through creativity rather than mere rote memorization of code patterns (though having some repertoire of software design patterns does help). Many prominent programmers have expressed similar sentiments on the importance of taste, elegance, and beauty in programming. Here’s how Linus Torvalds, the original hacker behind the Linux kernel and Git, explained good taste in 2007:
To me… the sign of people I really want to work with is that they have good taste… Good taste is about really seeing the big patterns and kind of instinctively knowing what’s the right way to do things.
And what are instincts but implicit patterns so deeply familiar, after years of repetition and experience, that we no longer remember learning them, embedded deep in the back of our minds? If this is true, then AI is just a monstrous instinct machine that has learned to recognize the big patterns in the collective knowledge we’ve put out on the Internet. It doesn’t truly understand the right way to do things, but it can infer it from the big patterns, and it will only get better.
To that point, AI has already internalized a wealth of creative output from perhaps some of the best programmers in the world and makes it readily available through generative code assistants. These assistants help us scaffold code from a blank page, suggest refactoring opportunities, and even fix code plagued with cryptic error messages. If this effectively takes the creative aspect of programming away from programmers, will the programmers, too, strike back?
The problem with this line of thinking is that chess is an adversarial, zero-sum activity (one must lose for another to win), while programming, like many productive economic activities, is a positive-sum activity (everyone wins something). So while AI takes the creative value of chess away from the players (by eating away at their opportunities for creative decision-making), it adds creative value for programmers (by opening up more of those opportunities). It makes no sense to strike back. We’re living the dream!
Exploiting the Kind and Capitalizing on the Wicked
There’s another interesting angle to AI’s influence on chess, related to learning and decision-making. Carlsen’s issue with the overreliance on rote-learning computer-generated opening lines, and his desire to revive the creative aspect of chess, reminds me of two thought-provoking ideas introduced by Robin Hogarth about how we learn and make decisions: the kind learning environment and its antithesis, the wicked learning environment.
To frame the two antithetical learning environments as questions: Does our decision-making improve with more experience? Or does more experience narrow our view of the world, making us more susceptible to bias in our judgment?
If your belief leans toward the former, you’re in the expert-intuition camp, which holds that good decision-making comes from years of repeated practice and experience. If, instead, you’re skeptical that more experience predicts excellent decision-making, you’re in the heuristics-and-biases camp, which holds that experience breeds overconfidence and leads to more decision errors.
A seminal paper titled Conditions for Intuitive Expertise: A Failure to Disagree reconciles the opposing views of these two camps. Its authors, Daniel Kahneman (of the heuristics-and-biases camp) and Gary Klein (of the expert-intuition camp), touched on their differing perspectives and their reconciliation:
In this article we report on an effort to compare our views on the issues of intuition and expertise and to discuss the evidence for our respective positions. When we launched this project, we expected to disagree on many issues, and with good reason: One of us (GK) has spent much of his career thinking about ways to promote reliance on expert intuition in executive decision making and identifies himself as a member of the intellectual community of scholars and practitioners who study naturalistic decision making (NDM). The other (DK) has spent much of his career running experiments in which intuitive judgment was commonly found to be flawed; he is identified with the “heuristics and biases” (HB) approach to the field.
A surprise awaited us when we got together to consider our joint field of interest. We found ourselves agreeing most of the time. Where we initially disagreed, we were usually able to converge upon a common position. Our shared beliefs are much more specific than the commonplace that expert intuition is sometimes remarkably accurate and sometimes off the mark. We accept the commonplace, of course, but we also have similar opinions about more specific questions: What are the activities in which skilled intuitive judgment develops with experience? What are the activities in which experience is more likely to produce overconfidence than genuine skill?
And so the answer to the questions above is: it depends. Ask a chess grandmaster, and you may get a nod of agreement. Ask a senior programmer, and he may disagree vehemently (simply because programmers are an angry bunch). What sets them apart is the domain in which their decision-making is applied. One lives in a world where the rules seldom change; the other has to adjust and adapt frequently to meet the demands of a rapidly shifting world (and gets angry every time a new feature is requested “at the last minute,” or a change introduces unexpected behavior).
Chess is considered the gold standard of a kind learning environment. It has simple, predictable rules and provides immediate feedback, yet it is quite complex thanks to the astronomical number of possible configurations of the pieces on an 8x8 board. (For traditional chess, a recent estimate is on the order of $10^{44}$ reachable legal positions.) A wicked learning environment is the opposite: the rules are unclear, complex, and may change overnight; feedback is often delayed; and what you learn today may not apply tomorrow. These are the defining features of the problems faced by most white-collar workers, and that certainly holds for programmers!
The table below summarizes the comparison:

| Kind Learning Environment | Wicked Learning Environment |
| --- | --- |
| Provides clear, fast, and accurate feedback | Provides misleading, slow, or ambiguous feedback (if at all) |
| Has consistent rules and patterns | Has ambiguous and changing rules |
| Easily learnable through repeated experience and trial and error | More experience and trial and error do not lead to better judgment |
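As a toy illustration of that last row (my own sketch, with made-up payoffs), consider a greedy learner that trusts its accumulated experience. In a kind environment the payoffs are stationary and experience converges on the best action; in a wicked one the payoffs silently drift, and the very same experience becomes misleading:

```python
import random

def run(drift: float, rounds: int = 10_000) -> float:
    """Greedy learner that estimates each action's payoff from experience.

    `drift` is the per-round probability that the true payoffs silently
    swap: 0.0 models a kind environment, anything above it a wicked one.
    """
    true = {"a": 0.7, "b": 0.3}            # hidden success probabilities
    est = {"a": [1, 2], "b": [1, 2]}       # [successes, trials] priors
    wins = 0
    for _ in range(rounds):
        if random.random() < drift:        # wicked: the rules change overnight
            true["a"], true["b"] = true["b"], true["a"]
        # pick the action that past experience says is best
        pick = max(est, key=lambda k: est[k][0] / est[k][1])
        success = random.random() < true[pick]
        est[pick][0] += success
        est[pick][1] += 1
        wins += success
    return wins / rounds

random.seed(42)
print(f"kind   (no drift): {run(0.0):.2f}")   # experience pays off, ~0.70
print(f"wicked (1% drift): {run(0.01):.2f}")  # same strategy, ~0.50 (coin flip)
```

The point isn’t the numbers but the shape of the result: the strategy that experience rewards in the kind setting is exactly the one that goes stale in the wicked one.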
In a kind, zero-sum environment like chess, the incentive is to use AI’s superior computational capability to exploit the solution space (commonly through “prepping”) to devise optimal opening strategies and gain leverage over competitors. This effectively shrinks the space for human creativity, turning expertise into a matter of rote memorization rather than tactical ingenuity. In contrast, in a wicked, positive-sum environment like programming, the incentive is to capitalize on existing knowledge and tools, using AI to accelerate discovery and expand our horizons. Here, AI doesn’t limit our creativity; rather, it enables us to explore and traverse complex solution spaces that were previously beyond our reach. In other words, AI’s interaction with a kind environment reduces the creative space for humans, but it expands that space in a wicked environment.
Though in this post I limit myself to chess vs. programming to drive my musing, I’m sure this observation rings true in many other domains as well. As I write this post, another example comes to mind: F1 racing (a kind, zero-sum environment) vs. forecasting long-term stock prices (a wicked, positive-sum environment).
In the case of F1, teams use AI to exploit useful patterns in real-time data to make fast pit-strategy decisions, a call that traditionally relied on a team of engineers. This narrows human choices down to those already exploited by AI. As for traders interested in the long-term future prices of stocks, the logical thing to do is capitalize on the wealth of financial and economic knowledge (a job better suited to AI than to humans) to help them make better decisions.
Well, does this mean AI’s involvement in chess, or any other kind environment, is bad? Not really. While it is generally true that in a kind environment AI reduces the number of choices available for optimization, it also shifts the creative frontier downstream, into small pockets of wickedness within the domain. This is apparently Magnus Carlsen’s play style: by going for offbeat opening choices, he embraces ambiguity and faces it with ingenuity. By steering away from well-exploited openings, he purposely lands himself and his opponents in the small pockets of wicked environments within chess, the middlegames and the endgames, where creativity triumphs over rote memorization.
Creativity is About Knowing Where to Aim
With the advent of AI assistant tools like GitHub Copilot and Cursor, I’ve come across many claims about the impending irrelevance of programmers in a future where AI reigns supreme. In the meantime, programming culture has spawned yet another subculture, called vibe coding, where the main language for software development is English and the code generated by AI is not meant to be fully understood (if you review, understand, and test the code, you’re not vibing).
Perhaps one of the most vocal leaders sharing this thought is Jensen Huang, the CEO of NVIDIA. At the 2024 World Governments Summit, he said something about the future of programming to the chagrin of many programmers (myself included):
I’m going to say something, and it’s going to sound completely opposite of what people feel.
You probably recall, over the course of the last 10 to 15 years, almost everybody who sits on a stage like this would tell you it is vital that your children learn computer science. Everybody should learn how to program. In fact, it’s almost exactly the opposite: it is our job to create computing technology such that nobody has to program, and that the programming language is human.
Everybody in the world is now a programmer.
This is the miracle; this is the miracle of artificial intelligence. For the very first time, we have closed the gap. The technology divide has been completely closed, and this is the reason why so many people can engage with artificial intelligence. It is the reason why every single government, every single industry conference, and every single company is talking about artificial intelligence today.
His speech would ring true if programming were merely a matter of writing code to make the interpreter or compiler happy. But programming is about more than generating code. Software needs to be useful in order to continue to exist, and much of the input to the activities that generate real value in a software product requires humans. While writing code is an important part of the business, it is not by itself sufficient to deliver real software value. To give some examples, think of software architecture and domain-driven design, both of which require hard-to-master skills (especially for programmers), like developing business acumen, making architectural decisions, and making complex topics stick.
We should also beware the fallacy of thinking that the programming domain is a zero-sum game: that for one party to win, another must lose; that if AI dominates programming, then programmers are pushed out of a job. This worldview disregards the fact that programmers and software developers operate in a collaborative, positive-sum world. In fact, as AI joins us in this positive-sum world, AI needs to capitalize on human programmers’ input for its output to be useful, and in turn, human programmers need to capitalize on AI to remain relevant.
This doesn’t mean I completely disregard the notion of AI replacing human programmers in the future. The key difference between the AI we’re using right now and the one that Jensen Huang portends is its capacity (or lack thereof) to be a manager instead of an individual contributor; to be an architect instead of a builder; or, to borrow Ben Thompson’s analogy, to be the Artificial Super Intelligence (ASI), the rifle barrel, instead of the Artificial General Intelligence (AGI), the ammunition:
What o3 and inference-time scaling point to is something different: AI’s that can actually be given tasks and trusted to complete them. This, by extension, looks a lot more like an independent worker than an assistant — ammunition, rather than a rifle sight. That may seem an odd analogy, but it comes from a talk Keith Rabois gave at Stanford… My definition of AGI is that it can be ammunition, i.e. it can be given a task and trusted to complete it at a good-enough rate (my definition of Artificial Super Intelligence (ASI) is the ability to come up with the tasks in the first place).
From this perspective, the future of programming is certainly bright: we may not be too far from a future where we can “hire” cheap individual contributors, the ammunition, to generate 100% of the code. But ammunition is only useful when we, the rifle barrel, point it at the desired targets. It gets trickier in a wicked world, where the targets often dance unpredictably. Oftentimes, it is unclear which targets we should even aim for. In other words, creativity is about knowing what and where to aim; simply being the projectile isn’t enough!
In an era where AI reigns supreme, the game we must learn to play will be one of unpredictability, chaos, and ambiguity. It is no longer about perfecting the established art, but about mastering the art of exploration and experimentation. And if history is any guide, humanity has always been remarkably good at just that.