Paco's Log

Do I even know what I am doing? Reflections on the use of AI coding assistants for ML work.

AI coding assistants are mainstream. They are used across industry, promising a boost in performance: you can achieve more, do more, even when you are not in front of the computer. Keep coding, keep producing, there is no time to lose. However, that makes me think about a small but important question: do I even know what I am doing anymore?

It is undeniable that AI coding assistants make us more productive. I can now achieve more in one day than what I used to achieve in a week. In the past few months, I have been able to implement features in software packages that would have taken me multiple web searches, rounds of bug fixing, and probably many tears to implement myself. All with almost zero effort on my side, just minor supervision of the changes being made. Even though I have been working on ML for a while, I can feel a decline in my working capabilities. This has been widely studied [1, 2], and the data is there to show that AI, used without any kind of critical thought, is detrimental both to the work we produce and to our ability to improve our skills (this makes me feel quite bad, to be honest). This happens to experts and non-experts alike, which raises the following question for me: experts should be able to analyze the output of the model and detect flaws, but what about non-experts?

Even though this happens across multiple areas, I want to touch upon one of them given my closeness to it: ML research/engineering. Nowadays, AI adoption is being forced on companies by management. You might not have any kind of AI knowledge, but now you need to apply a new flashy model to your problem. Was it even necessary? Who cares, just label it as AI and that’s it. This leads to one of my main concerns about AI coding assistants for ML research/engineering: without basic knowledge of ML, the model you train can be completely useless, and deploying it in a real-world scenario requires twice the amount of work: the work to implement it in the first place, and the work of an external reviewer to make sure it makes sense. Recently, I asked someone to show me the code used to train a model, and this person had to ask the LLM which script was the one used to train it. This is especially cumbersome for AI benchmarks: if people are benchmarking models without knowing what they are doing, how can we even know that the benchmarks are correct? This undeniably shows that even though AI coding assistants are democratizing ML research/engineering, the capabilities of the people implementing them are worsening. Given that these models are being used more often than ever, this will also impact the capabilities of the models themselves and their perception by the public.

Am I condemning the people who use AI coding assistants to implement models without any prior knowledge? Of course not; I reckon it is a capitalism problem. Capitalism tells us that we always have to produce more. We cannot stop producing, and AI coding assistants and AI agents are just a reflection of that. We no longer care about learning, understanding, or even internalizing the concepts that help us produce more robust ideas and work. Why should I do one task with full control over what is going on when I can accomplish five times more without having to worry about the nitty-gritty details, only about the outcome? Nowadays, the important thing (elegantly presented by AI influencers) is that you can build a product or idea without any prior knowledge, using a few prompts, in a matter of hours.

At the end of the day, we are just falling into the same capitalism trap: producing more is always better. Industry is falling for the same lies that, in my opinion, will lead us to more unfulfilling work and less control over what we do. Unfortunately, capitalism always wins.

P.S.: This post was written without the use of any AI-based language assistant, so it may contain grammatical errors, since my English still needs improving, but I love that that’s the case :)