
Applying AI to technical information

AI today offers more opportunities than ever before. But in the world of technical information, just being able to answer questions is not enough: the information must be correct. This means that simply applying AI in the form of a Large Language Model (LLM) to technical information is bound to bring certain risks. So, to make sure that you invest in AI in the right way, we have summarised why applying an LLM on its own is not enough for technical information, and what you should do instead.

An LLM has been trained on large amounts of text and data to generate human-like text and answers to questions. LLMs are powerful tools for generating natural language, but since they are trained on general rather than technical information, they fall short when it comes to specialisation and conceptual understanding. This means that an LLM is not capable of giving the precise and correct answers that technical information requires.

There are several reasons why an LLM cannot replace the need for well-structured and correct technical information:

  1. Trained on general information. LLMs are generic language models, not specialised in any specific subject area. Technical information can be highly specialised and requires deep knowledge within a certain area, something an LLM often lacks.
  2. Quality and breadth of data. Even though LLMs are trained on vast amounts of information, there is no guarantee that the training data covers all technical areas equally. More often than not, there are gaps where technical terms or concepts do not occur often enough for the model to learn them correctly.
  3. Conceptual understanding. Even if an LLM can generate grammatically correct text, it often struggles to understand deeper meaning and the connections between technical concepts. It may answer a question based on a superficial understanding, without regard for the technical context.
  4. Poor handling of complexity. Technical information can be very complex and often demands a deep understanding of the product or system, as well as an extensive ability to explain using text, images and animations. LLMs often have difficulty handling this complexity and can instead give simplified or even wrong answers.
  5. Insufficient source management. LLMs do not have the capacity to assess sources and verify information. They can draw faulty conclusions from wrong or unreliable sources, which can lead to incorrect results when it comes to technical information.

What you should do

Only applying an LLM can of course be done, but that would increase the cost and time needed to train it properly, still without any guarantee of high-quality output. What you can do instead is implement Retrieval-Augmented Generation (RAG) together with the LLM. A RAG system is a way of ensuring that the information you feed into the LLM is correct, and that the LLM only uses this information to answer questions about the technical content. It also adds traceability: the source of each answer is known and easy to check, making sure that the output from the AI is correct.
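To make the pattern concrete, here is a minimal sketch of RAG in Python. The corpus, the word-overlap scoring and the stubbed generation step are illustrative placeholders; a real system would use vector embeddings and an actual LLM API, but the grounding and traceability mechanics work the same way.

```python
# Minimal RAG sketch: retrieve approved content, then ask the LLM to
# answer only from that content. Scoring and generation are simplified.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str  # document/section ID, kept for traceability

CORPUS = [
    Chunk("Tighten the drain plug to 25 Nm after an oil change.",
          "service-manual/ch3#oil-change"),
    Chunk("The coolant reservoir holds 5.6 litres.",
          "service-manual/ch4#cooling"),
]

def score(query: str, chunk: Chunk) -> int:
    # Toy relevance score: count of shared words. Real systems
    # compare vector embeddings instead.
    return len(set(query.lower().split()) & set(chunk.text.lower().split()))

def retrieve(query: str, k: int = 1) -> list[Chunk]:
    return sorted(CORPUS, key=lambda c: score(query, c), reverse=True)[:k]

def answer(query: str) -> str:
    chunks = retrieve(query)
    context = "\n".join(c.text for c in chunks)
    sources = ", ".join(c.source for c in chunks)
    # The prompt instructs the LLM to answer *only* from the retrieved
    # context; this is what keeps the output grounded in approved content.
    prompt = (f"Answer using only the context below.\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    # An LLM API call would go here; stubbed out for this sketch.
    return f"[LLM answer based on: {context!r}] (source: {sources})"

print(answer("What torque for the drain plug?"))
```

The important detail is that every answer carries the identifier of the chunk it came from, so a reader can always verify the answer against the original technical content.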

Implementing RAG, however, only takes you so far. To gain the most value from an AI solution, the input must also be properly organised and structured. A powerful Content Delivery Platform (CDP) ensures high-quality input.
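As an illustration, this is roughly what a structured, metadata-rich topic delivered by a CDP could look like. The field names here are hypothetical, not any specific CDP's schema, but the point holds generally: filtering on metadata such as product and version before retrieval keeps the AI from mixing up products or serving outdated revisions.

```python
# Hypothetical shape of a structured topic as a CDP might deliver it.
# All field names are illustrative.
structured_topic = {
    "id": "topic-4711",
    "title": "Changing the engine oil",
    "product": "Model X",
    "version": "2024.1",
    "language": "en",
    "body": "Drain the oil, replace the filter, tighten the plug to 25 Nm.",
}

def applies_to(topic: dict, product: str, version: str) -> bool:
    # Filter on metadata before retrieval so the LLM only ever sees
    # content that is valid for the user's exact product and revision.
    return topic["product"] == product and topic["version"] == version
```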

Contact

Curious to know more about AI?

Lars Löfgren

Business development manager

Aftermarket information