The past 8 months have been interesting in our world of computer science and software engineering. By ‘our’ I mean all of us, but also the teams I’m working with right now. Recently I wrote a bit about Unified Namespaces for industrial automation, which is something we are spending a lot of time and effort on, but in the background there are also the seemingly unstoppable developments in the world of Artificial Intelligence. Suddenly, everybody seems to be on top of data again, and AI is going to do all the impossible stuff with it that we have been unable to do for years.
So, are we preparing to be out of a job soon? Or are we going to be the front runners that make it happen? For the time being, we are neither, because there is no need. Artificial Intelligence has been around for decades already, and developments have been speeding up a lot in recent years, but there’s not much to be afraid of as far as I’m concerned. On the contrary, just like other developments before it, starting with the printing press and the first industrial revolution, AI is going to help us get rid of the next batch of repetitive work. That’s what it’s been doing in some areas already - what AI algorithms for image detection do is not so different from the manual image processing activities we’ve been using since the 1980s, except that they do it on their own, and slightly faster than a human selecting the processing steps by hand.
At the same time, it is also becoming clear that writers, who were the first to fear for their jobs when ChatGPT showed up, don’t have to be afraid yet. The branch of AI that underlies ChatGPT, Large Language Models, is capable of doing a lot of things with language, but it lacks something that human writers have. ChatGPT can write texts based on what it gathers from the historical data it was trained on - not just the content, but also the language structures. However, being a piece of technology, an algorithm, it has a lot of trouble capturing emotions the way humans do. It can’t easily judge whether it should touch somebody by making them sad, happy, or angry. With clever prompting (prompt engineering appeared to be a new job description for a few months) it may come close to what you’d expect, but it cannot consistently deliver the real emotional touch of a human writer. And without that clever prompting the output becomes repetitive, boring drivel, as somebody wrote in an article on using AI for commercial text creation a few weeks ago.
What is important for AI algorithms to be successful is the same as for engineers: the key to coming up with a fitting solution is knowing the context in which that solution is to be used. Take our own work on Unified Namespaces, UNS: if we don’t know in what context we are going to apply the UNS, we won’t be able to select the right technologies to implement it. Neither will an algorithm that is asked a similar question. Without context, both the engineer and the algorithm will regularly come up with solutions that don’t fit. Referring back to ChatGPT, a good example was given by Cap Gemini’s Robert Engels in a Medium article (https://medium.com/@dutchbob/ai-is-useless-without-context-491d13008584). ChatGPT, as it was released in November last year, was based on OpenAI’s LLM GPT-3. GPT-3 itself could answer questions in a similar way as ChatGPT does, but it was proven to make context-related mistakes. The example that Engels gave is that GPT-3 would easily become racist in its answers, because it was not given the context of how to act in a civil conversation. What OpenAI did with ChatGPT was put a context layer around its GPT models, one that is capable of detecting unwanted outcomes and makes the model function better in relation to the real world.
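To make the idea of such a context layer concrete, here is a minimal sketch of the pattern: context is injected before the raw model sees the prompt, and the output is checked for unwanted outcomes before it reaches the user. All names and rules below are illustrative assumptions, not OpenAI’s actual implementation, and `base_model` is a stand-in rather than a real LLM.

```python
def base_model(prompt: str) -> str:
    """Stand-in for a raw LLM. Illustrative only: it just echoes the
    prompt as a canned completion instead of generating real text."""
    return f"Completion for: {prompt}"


# Illustrative deny-list of outcomes the layer should catch.
UNWANTED_MARKERS = {"slur", "insult"}


def context_layer(prompt: str) -> str:
    """Wraps the raw model with context on the way in and a check on
    unwanted outcomes on the way out."""
    # 1. Inject context so the model 'knows' how to behave in a
    #    civil conversation.
    framed = "Answer as a polite, civil assistant.\n" + prompt
    output = base_model(framed)
    # 2. Detect unwanted outcomes and refuse instead of passing them on.
    if any(marker in output.lower() for marker in UNWANTED_MARKERS):
        return "I'd rather not answer that."
    return output
```

A real system would of course use a trained classifier (or a second model) for the outcome check rather than a keyword list, but the shape is the same: the model in the middle never changes, only the context around it does.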
That same layer is what allows us to give the model some context to guide it toward an acceptable outcome - that’s where ‘prompt engineering’, as I mentioned earlier, plays a role. However, even with such a layer, models or algorithms are not yet at the point where they can really replace a human in any given area. In software coding, ChatGPT and its quickly growing number of competitors and alternatives still make mistakes. In writing, the models still do not show emotion. Similar reasoning applies to other domains, no doubt. We still need to provide context, and as long as the models are not aware of the world around them, we need humans to provide that context. So, we’re not afraid, but we are curious about what the future will bring. We’re also curious to hear your opinion about how this is going to develop, so feel free to drop me an e-mail or a message after you read this.