For the past few weeks, LinkedIn and the world of computer scientists, geeks and investors have been ruled by the hype over ChatGPT.
OpenAI unleashed the beast, showcasing how a Large Language Model with the right training data is capable of producing very interesting output.
ChatGPT has been shown to write articles and make up quotes for Elon Musk. People have even made it write software and user documentation.
Of course, a lot of these are just experiments that people ran and published because they were impressed by the outcome. Not always in a positive way: some people also like pointing out that ChatGPT makes mistakes, and that it sometimes even contradicts itself.
Beyond the hype, there is also the question of how this will affect our work from now on. Some claim it will cost jobs; others claim it will just change the way we do things. The truth, as always, will be somewhere in the middle, I expect.
Bringing this question to Industry 4.0 and manufacturing, I see potential as part of digitalisation. A lot of the data in our factories consists of numbers: temperature, pressure, revolutions per minute, kilograms, liters, amps and volts, and so on. That is outside the domain of ChatGPT and GPT-3, which are large language models (LLMs). ChatGPT has been proven to make calculation mistakes. When I asked it to write my obituary for when I die at age 121, it claimed that I would be in my 70s around 2060, while I was born in 1973.
However, we collect data from our production lines and store them in some form of database. We do have machine learning algorithms that help find patterns in these data stores. Patterns that we know and ask for, and patterns the algorithms discover by themselves.
Getting these on the table requires quite a bit of knowledge of how to train and trigger the algorithms, and the results have to be converted from raw data into a presentable form to mean anything to factory operators and managers. Data scientists are hired to accomplish those jobs, and they will continue to be in the future.
What would happen if we could combine an LLM, or even a smaller natural language processing (NLP) model, with our machine learning algorithms? Could we perhaps create a user experience in which an operator asks questions and follow-up questions, doing some of the data scientist's job without being a computer expert? Imagine an operator asking a series of questions in plain English and getting actual results:
“Give me the deviations in the amount of flour dosed into our bread production line over the last 8 hours”.
The result is a table with times, identifiers and deviations.
“Identify the largest deviations and find a correlation between them and the time they occurred.”
The result is a smaller table, showing that around 50 dosings had large deviations within the same 20-minute period.
“Show me what else was happening in the factory around that time, near the dosing machine.”
The result is a list of running operations in the requested area, including the unloading of a number of pallets from a truck with incoming goods.
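To make the idea a bit more concrete, here is a minimal sketch of what the back end behind the first two questions might look like, once the language model has translated them into structured queries. Everything here is hypothetical: the `Dosing` record, its field names, and the 0.5 kg threshold are assumptions for illustration, not a real factory schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical dosing log record; the field names are illustrative assumptions.
@dataclass
class Dosing:
    time: datetime
    target_kg: float
    actual_kg: float

    @property
    def deviation(self) -> float:
        # Deviation = how far the actual dose was from the recipe target.
        return self.actual_kg - self.target_kg

def deviations_last_hours(log, now, hours=8):
    """Answer 'give me the deviations over the last N hours'."""
    cutoff = now - timedelta(hours=hours)
    return [d for d in log if d.time >= cutoff]

def largest_deviations(records, threshold_kg=0.5):
    """Answer 'identify the largest deviations': filter by magnitude, worst first."""
    big = [d for d in records if abs(d.deviation) >= threshold_kg]
    return sorted(big, key=lambda d: abs(d.deviation), reverse=True)

# Tiny synthetic log: one dosing per hour, with one large deviation two hours ago.
now = datetime(2023, 1, 15, 12, 0)
log = [Dosing(now - timedelta(hours=h), 25.0, 25.0 + (0.9 if h == 2 else 0.1))
       for h in range(12)]

recent = deviations_last_hours(log, now, hours=8)
worst = largest_deviations(recent)
print(len(recent), round(worst[0].deviation, 2))
```

The point is not the code itself, which any data scientist writes today, but that the NLP layer would generate calls like these from the operator's plain-English question, so the operator never sees them.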
This is an example I have used before to show how sensor data and human interaction can be combined to find problem causes outside the normal production operation. Without complete data, it will always require human intervention to find such correlations. With the correct data in place from the production line, logistics, planning and so on, they can be found by a data scientist through analysis and queries.
By adding natural language processing, a domain expert like the operator in this example can do it themselves. Best part: the data scientist and the machine learning expert would still be needed. They still have to create the machine learning and language processing algorithms and train them. The user interface would change, and the feedback cycle would become shorter, that's all.
Well, that’s all? Maybe not, but I’d be curious to hear your thoughts on this.