3 AI pitfalls to avoid on the road to business growth

Alex Russell, Regional Sales Manager, SADC at Nutanix

According to a recent report, more than 30 AI communities in Africa offer various services in the field. Tunisian AI start-up InstaDeep received $100 million in funding early last year, and the market across the continent is projected to grow by 20% annually from 2022 to 2029. But even though AI promises significant business benefits, there are pitfalls companies must avoid to realize them.

1. Rising Costs

One of the most significant challenges is managing the cost of running AI models. Even before AI, businesses found it challenging to keep their cloud adoption costs low. Developing a business case for AI is fundamental to controlling spiraling costs. It is a straightforward calculation: the greater the cost incurred when running generative AI, the greater the benefit required to achieve a return on investment.

Massive amounts of general-purpose and purpose-built data drive AI engines. All this data must be stored, managed, and secured, incurring costs at each step. To better manage these expenses, decision-makers use the public cloud to train their AI models before running them on the company's own infrastructure, which offers better cost control.

Considering that running a compact AI model in the cloud can cost almost five times as much as running it in an on-premises environment, moving inference on-premises makes business sense.
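To put the arithmetic behind that comparison in concrete terms, here is a minimal sketch. The monthly on-premises figure is a hypothetical placeholder; only the roughly five-to-one cloud-to-on-premises cost ratio comes from the comparison above.

```python
# Illustrative break-even arithmetic for running a compact AI model.
# ON_PREM_MONTHLY is a hypothetical placeholder cost; the ~5x cloud
# multiplier reflects the comparison cited above.

ON_PREM_MONTHLY = 10_000   # assumed on-premises running cost per month (USD)
CLOUD_MULTIPLIER = 5       # cloud can be ~5x more expensive

cloud_monthly = ON_PREM_MONTHLY * CLOUD_MULTIPLIER
annual_saving = (cloud_monthly - ON_PREM_MONTHLY) * 12

print(f"Cloud monthly cost:     ${cloud_monthly:,}")
print(f"Annual on-prem saving:  ${annual_saving:,}")
```

At these assumed figures the gap compounds quickly over a year, which is why the business case should be built before the model goes into production, not after the first invoice arrives.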

2. Controlling Data

Every organization must recognize the importance of securing and controlling its data. An obstacle companies must avoid when implementing AI models is violating data security and sovereignty regulations, especially in Africa, where each country has different compliance requirements.

Managing data compliance in the public cloud is difficult even with traditional workloads. The introduction of new AI models means governance policies must be adapted to this advanced technology, making compliance increasingly complex.

Multinationals must prioritize data sovereignty. With the privacy implications of using AI still under evaluation, a global consensus on governing AI models and the data used to train them is likely. Recent reports of authors suing OpenAI over the alleged theft of their copyrighted works to train AI models demonstrate the importance of addressing this issue.

One way companies can mitigate data sovereignty risks is to ensure that operations in South Africa use South African data, and AI models are used only in South Africa. Introducing data from Nigeria or Kenya into the South African model could present regulatory risks.

The regulatory questions raised by AI will take time to answer. In the meantime, companies must maintain control of their data and applications, which is harder on public cloud infrastructure they don't own.

3. Security First

When developing a customer service chatbot, for example, its training must include proprietary product data, which raises multiple security concerns. Organizations are unlikely to be comfortable keeping such sensitive information in the public cloud.

Businesses need to realize that data copied into public AI platforms may be retained and used to train future models, placing it beyond the company's control. The consequences of employees inadvertently pasting trade secrets or other sensitive company information into such an environment can be devastating.

AI projects are typically undertaken to gain a competitive advantage, which can only be realized if the models and the data used to train them are kept secure. Exposing them through public platforms negates any potential benefit of using AI in the first place.

While AI can reimagine how businesses, governments, and society operate, being aware of these pitfalls is crucial. Just as freely available social media platforms capitalize on personal data, AI models must be carefully scrutinized before use, as the consequences may not be worth the reward.