No matter the level of performance of artificial intelligence (AI), it will not be able to provide satisfactory results if it is fed with fragmented or even erroneous data.
In fact, a good quality, reliable and well-structured database is absolutely essential. Therefore, it is critical for companies to implement effective data management.
According to McKinsey, activities that today account for 60-70% of employees' daily work time could eventually be automated using AI-based technologies such as predictive AI and generative AI (GenAI).
This observation is shared by LaborIA, a research laboratory dedicated to artificial intelligence created by the French Ministry of Labor, Full Employment and Integration, in its 2023 study on the impact of artificial intelligence systems (AIS) in companies and public organizations. The study found, in particular, that AI is generally implemented with the aim of reducing the risk of error (85%), improving employee performance (75%), and reducing tedious tasks (74%).
With the surge of interest in GenAI and large language models (LLMs), the topic of artificial intelligence (AI) has gained new momentum among companies. Although the deployment of GenAI applications in business is still in its early stages, Bloomberg estimates that the market could reach US$1.3 trillion within the next ten years.
In this context, the expectations of the board of directors and management are very high. However, for the technology to deliver tangible improvements, it is first necessary to find relevant use cases that lend themselves to AI applications.
Furthermore, requirements for AI use cases and models differ not only across industries or companies, but also with a company's level of AI maturity. Companies that naturally generate large amounts of data, for example through the use of IoT devices, have a head start over other companies when it comes to data technology.
However, this does not mean that they will automatically achieve the expected success in AI. Analysts and experts typically estimate that between 60 and 80 percent of AI projects fail. The reason is that the volume of data alone is not enough; the data must also be of good quality.
Without quality data, it is impossible to reap the benefits of AI
When asked about the risks delaying the implementation of AI in general, and GenAI in particular, companies often cite a lack of time, financial resources or skills. Also according to the McKinsey study, 56% of companies believe that the risk lies in potentially incorrect results.
Weka’s 2023 Global AI Trends Report also demonstrates that the biggest barrier to AI innovation is insufficient data management (32%). This clearly indicates that the current data architecture of many companies is not yet ready for large-scale changes. The challenges are therefore largely due to poor data quality and/or poor data management.
Poor data quality leads to problems such as inaccurate predictions and decisions, distortions, wasted resources, and even legal repercussions. Therefore, the higher the quality of the data, the more useful and reliable the results will be.
To achieve this, companies must first determine where data is located on their network, what its quality level is and how it is obtained. This immediately gives rise to another challenge: data integration. The training data that AI systems need comes in different formats, from multiple sources and in varying volumes. Yet as IT landscapes grow more complex, with data silos, duplicate data, incompatible formats and convoluted ETL processes, it becomes increasingly difficult to gather high-quality data.
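To make the integration challenge concrete, the following minimal sketch joins records from two hypothetical sources in different formats (a CSV export from a CRM and a JSON feed from IoT devices) into one unified record per customer. The field names, sample values and join key are illustrative assumptions, not taken from any specific system:

```python
import csv
import io
import json

# Hypothetical sample feeds; real pipelines would read these from files or APIs.
crm_csv = "customer_id,email\n42,ana@example.com\n"
iot_json = '[{"customerId": 42, "device_temp_c": 21.5}]'

def load_crm(text):
    # CSV source: every value arrives as a string, keyed by "customer_id".
    return {int(r["customer_id"]): r for r in csv.DictReader(io.StringIO(text))}

def load_iot(text):
    # JSON source: same entity, but a different key name and native types.
    return {int(r["customerId"]): r for r in json.loads(text)}

def unify(crm, iot):
    """Join both sources on the shared customer key into one record each."""
    unified = {}
    for key in crm.keys() | iot.keys():
        record = {"customer_id": key}
        record.update({k: v for k, v in crm.get(key, {}).items() if k != "customer_id"})
        record.update({k: v for k, v in iot.get(key, {}).items() if k != "customerId"})
        unified[key] = record
    return unified

records = unify(load_crm(crm_csv), load_iot(iot_json))
print(records[42])  # one unified record combining both sources
```

Even this toy example has to reconcile diverging key names and types; at enterprise scale, a data integration layer performs exactly this reconciliation across hundreds of sources.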
At the same time, it is essential to democratize data and make it accessible so that users and systems can reach it easily. In this context, companies planning or already implementing AI projects must take into account data protection regulations such as GDPR, the forthcoming EU AI Act and the latest international agreements on AI safety.
How is data ownership and use regulated? How are access, security and privacy guaranteed and controlled? How to avoid possible biases in AI systems? Who is responsible for where the data ends up and what is done with it?
Taming data chaos through effective management
To achieve good results, AI models depend on companies' ability to identify, collect, prepare, manage, protect and make accessible relevant and reliable data. A centralized, scalable and automated data management solution that addresses the challenges outlined above can help. Through several functions, it connects, unifies and democratizes data, bringing order to a complex ecosystem:
> Data cataloging, to facilitate the identification, classification and traceability of data (data lineage).
> Data integration to integrate data in different formats and from different sources to create an agile data pipeline.
> Data quality, to get an overview of the status of all data across the entire pipeline and identify anomalies, duplicates and inaccuracies. Improving quality means implementing data cleaning and normalization rules and integrating them into the data pipeline.
> Data management, to provide accurate, consistent and reliable data.
> Data sharing, for reusing trusted data and AI models.
> Data protection, privacy and governance, to manage data quality, privacy and compliance. For example, companies no longer need to transfer their information to a (more vulnerable) public cloud.
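The cleaning and normalization rules mentioned above can be pictured as a small, testable pipeline step. This is a minimal sketch, assuming a simple record shape with hypothetical `email` and `country` fields; it applies three typical rules: normalization, a required-field check, and deduplication:

```python
# Illustrative input records; field names and values are assumptions.
raw = [
    {"email": " Ana@Example.COM ", "country": "pt"},
    {"email": "ana@example.com",   "country": "PT"},   # duplicate once normalized
    {"email": None,                "country": "FR"},   # fails the required-field rule
]

def normalize(rec):
    # Normalization rule: trim and lowercase emails, uppercase country codes.
    return {
        "email": rec["email"].strip().lower() if rec["email"] else None,
        "country": rec["country"].upper(),
    }

def is_valid(rec):
    # Required-field rule: a record without an email is unusable downstream.
    return bool(rec["email"])

def clean(records):
    seen, out = set(), []
    for rec in map(normalize, records):
        if not is_valid(rec):
            continue  # a real pipeline would quarantine or log the record
        if rec["email"] in seen:
            continue  # deduplication rule: keep the first occurrence only
        seen.add(rec["email"])
        out.append(rec)
    return out

print(clean(raw))  # one clean, deduplicated record survives
```

The point is not the specific rules, which vary by domain, but that they are explicit, automated and embedded in the pipeline, so every AI model downstream receives data that has already passed the same checks.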
In this context, the following question directly arises: is the data architecture designed to manage the increasingly complex data ecosystem and to effectively automate the growing number of AI use cases?
In most cases, the answer is no. Existing data management may allow a company to build a model once, but not to feed it high-quality data in a repeatable way. This is why it is essential to implement a solid, modern data architecture in the form of a data fabric and/or a data mesh.
Companies that want to use AI and generative AI to automate, accelerate and streamline their processes should, if they have not already done so, address the data fundamentals of AI, the basic condition for an efficient, safe and fast implementation. At the same time, they should move as quickly as possible from the hype phase to identifying relevant AI use cases for their core business.
They then need to determine what they need to implement, how they can update their data management, and whether this requires changing any part of the underlying data architecture.
It is therefore not enough to create an account for public AI applications like ChatGPT. Instead, AI projects must be approached in a holistic and sustainable way, so as to be prepared for future requirements as well. Without reliable, quality data, the journey towards a future with AI and generative AI is doomed to failure.