Should I delegate that to AI?
As mentioned in the previous post, I currently have mixed feelings about generative AI tools (which I'll refer to simply as AI for the rest of this post). On one hand, these are very impressive tools that can drastically enhance one's efficiency, and they can already handle a wide range of tasks single-handedly or with minimal guidance. On the other hand, I am also well aware of the limitations of these tools, and of the dangers (to AI users, as well as to society as a whole) of using them blindly.
Even though some people seem to think that total refusal of AI is the only way to deal with these dangers, I’m not one of them. I believe a healthy way to address these dangers is to acknowledge the capabilities of these tools without overhyping them, and to illustrate how they can be used safely.
In this post, I decided to share the flowchart I use when I have something to do and need to decide whether I should delegate it to AI. That something can be a work-related task (a feature to develop, a pull request to review, an investigation to conduct, a prototype to build, documentation to update, etc.), a side project, or any other task with a tangible output (e.g. code, review comments, an investigation report, documentation); I then have to decide whether I will be the one producing this output, or whether I can delegate it.
There are other good use cases for AI, such as brainstorming or AI-assisted research, where AI and human work in tandem. I think using AI for these use cases is always a good idea, as long as you remain critical of the AI's suggestions. I won't cover them here, and will focus on the delegation question.
Let me share the flowchart first, and elaborate later on how I came up with this.
```mermaid
flowchart TD
start(I have <em>something</em> to do) --> is_confidential
is_confidential[Is it confidential?] -- No --> is_urgent
is_urgent[Is it urgent?] -- Yes --> is_exact
is_exact[Would I get in trouble if the output is incorrect?] -- Yes --> is_reviewable
is_reviewable[Will I be able to review the expected AI output?] -- No --> human_with_ai_review(🧠<br>Do it myself, and use AI to review it)
is_reviewable -- Yes --> is_ai_faster
is_exact -- No --> is_ai_faster
is_ai_faster[Do I think AI can do this faster than I can?] -- Yes --> ai_with_timebox(🤖<br>Try doing it with AI within a timebox, and fall back to doing it myself)
is_ai_faster -- No --> human_with_ai_review
is_urgent -- No --> is_learning
is_learning[Might I learn something new if I do this myself?] -- Yes --> is_interesting
is_interesting[Am I interested in what I could learn doing it?] -- Yes ----> is_time_valuable
is_time_valuable[Do I value AI's speed above the fun/interest of doing it myself?] -- No --> no_ai(🧠<br>Do it myself, resorting to AI if I get stuck or need a review)
is_time_valuable -- Yes --> ai_with_timebox
is_interesting -- No --> is_exact
is_learning -- No --> is_fun
is_fun[Would it be fun to do this myself?] -- Yes --> is_time_valuable
is_fun -- No --> is_exact
is_confidential -- Yes --------> zero_ai(🧠<br>No AI)
```
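For readers who prefer code to diagrams, the decision flow above can be sketched as a small function. This is just a minimal translation of the chart into Python; the parameter names are mine (they mirror the questions in the boxes), not part of any tool or API:

```python
def should_delegate(
    confidential: bool = False,
    urgent: bool = False,
    trouble_if_incorrect: bool = False,
    can_review_output: bool = False,
    ai_faster: bool = False,
    could_learn: bool = False,
    learning_interests_me: bool = False,
    fun_to_do: bool = False,
    value_speed_over_fun: bool = False,
) -> str:
    """Walk the flowchart; each `if` mirrors one decision node."""
    if confidential:
        return "no AI at all"
    if not urgent:
        # Non-urgent tasks: weigh learning and fun before anything else.
        if (could_learn and learning_interests_me) or (not could_learn and fun_to_do):
            if value_speed_over_fun:
                return "try with AI (timeboxed), fall back to doing it myself"
            return "do it myself, use AI only if stuck or for a review"
        # Nothing to learn or enjoy here: treat it like an urgent task.
    if trouble_if_incorrect and not can_review_output:
        return "do it myself, then use AI to review it"
    if ai_faster:
        return "try with AI (timeboxed), fall back to doing it myself"
    return "do it myself, then use AI to review it"
```

For example, `should_delegate(confidential=True)` short-circuits to "no AI at all" regardless of every other answer, just as the top branch of the chart does.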
This flowchart essentially derives from a few observations and principles:
- the AI tools I have access to (mostly coding agents such as Claude Code or Codex) are already extremely good at a wide range of well-defined tasks:
  - any task I consider straightforward can be done by an AI with minimal guidance
  - for any task outside my current area of expertise, AI will perform better than I would, unless I invest significant effort myself
- the powerful models I usually rely on all run on remote servers, owned by companies in which I have limited trust. I may try to run some models locally later, but I am not doing so at the moment.
- these tools need the relevant context to make good decisions, and providing that context to the AI can take a significant amount of time
- some tasks are hard to properly specify without first actually implementing them
- AI is not good at abandoning one approach to explore another. While working with AI, I often need to reset and start over, which can be very inefficient when setting up the context takes time.
- AI can hallucinate, and I can never blindly trust its output. Generally speaking, if I have enough knowledge to perform a task, I always trust my own output more than any AI's.
- doing things by myself is an important way to acquire knowledge and maintain my own cognitive skills
- one reason I chose to pursue a career in software engineering and avoided the management track is that I usually enjoy doing the tasks I have to do; delegating things to an AI can remove part of the fun
- I do not think using AI as I do contributes to the dangerous hype around AI, or legitimizes its nefarious uses
This chart does not take into account:
- the price of AI: I currently do not have to pay anything extra when I decide to delegate something to AI, since my employer covers the costs of Claude Code and Codex, which I use for almost everything.
- the ecological impact: I know that the inference process is quite heavy, but I still haven’t developed a good sense of how heavy it is compared to other processes (e.g. web search). I assume I’m not the only one in that situation, and it would be great to build tools to help AI users understand the ecological footprint of their AI usage, so that they can adapt it.
- the potential impact of future AI changes, for the better (improved capabilities) or the worse (higher price, higher bias): it only considers what AI can deliver now
I believe this flowchart is relatively generic, and I would recommend it to most of my fellow software engineers, regardless of their seniority. This does not mean that everybody should use AI in the same way: the same chart, applied to the same task, can lead to different outcomes depending on the human side. For instance, junior software engineers should typically end up delegating less to AI than senior engineers, because most of their tasks are opportunities to learn something new.
Feel free to try applying it yourself, or share it with others!