
By Vivek Behl, digital transformation officer, WalkMe

Imagine that you are a finance manager who has just been asked to report on the latest monthly transactions to your chief financial officer (CFO). To save time, you turn to ChatGPT, feed it all the information you have, and let it draft the report for you. With just a few clicks, you have a professional-sounding report that gets the point across with minimal grammatical errors.

Unfortunately, the platform can also absorb that transaction data to train its models, and the data can then be reproduced and surfaced to other users, posing a serious threat to companies’ security and compliance.

This is particularly concerning as employees are increasingly turning to generative AI applications to work faster and more efficiently. The trend has given rise to the term ‘shadow AI,’ which refers to the use of AI applications without the explicit knowledge or control of an organisation’s IT department.

The trend has reached the point where 76 per cent of HR leaders believe that organisations will lag if they do not embrace AI solutions within the next 12 to 24 months, according to Gartner.

To counteract this issue, Southeast Asian governments are already taking steps to advance AI governance. Singapore led the charge through the launch of its National AI Strategy in 2019 and the Model AI Governance Framework the following year.

Similar strategies have also been published in Indonesia, Thailand, Malaysia, and Vietnam. Meanwhile, the Association of Southeast Asian Nations (ASEAN) has welcomed the development of an ASEAN Guide on AI Governance and Ethics to help close the region’s AI skills gap.

However, organisations themselves need to be able to guide their employees on safeguarding their data, especially when they are engaging with generative AI applications beyond their control.

Data privacy concerns

While shadow AI can boost workers’ performance and creativity through popular generative AI applications like ChatGPT, it also raises the risk of data privacy violations: employees may not appreciate the difference between inputting information into a generative AI platform and typing it into a Word document. For example, users who rely on generative AI to rewrite emails or review code may inadvertently expose proprietary information to the models, which can then use it in future responses.

Balancing risk and reward with AI governance

While well-meaning employees can inadvertently leak sensitive company data using unregulated generative AI applications, there is also the threat of malicious attacks. In today’s ever-evolving threat landscape, data thieves and cyber attackers are constantly roaming the digital space, looking for vulnerable and sensitive information they can use to target organisations.

Simply banning the use of generative AI applications may block malicious attempts to intercept company secrets and prevent accidental data breaches, but it also means employees miss out on key benefits that could otherwise drive desired business outcomes.

After all, there is a reason employees are turning to these applications in the first place: they offer significant productivity, efficiency, and creativity gains for both workers and the business.

While integrating a traditional data loss prevention (DLP) solution may seem like the logical approach, it is not exactly tailored to the specific risks posed by the trend of shadow AI. One reason is that employees will not receive contextual information that will help them better understand their mistakes.

As a result, they remain likely to put sensitive company information at risk. A DLP solution also requires considerable effort to maintain, and even then it may not catch every case. Rather than relying on blunt controls, organisations should adopt a more proactive, employee-first approach that sets up guidance and guardrails for engaging safely with these new AI technologies.

Guiding employees on using AI responsibly

One of the best ways organisations can help users avoid data leaks when engaging with AI tools is through digital adoption platforms (DAPs). A DAP sits on top of software and applications like a glass layer, providing customised guidance and automation to employees while generating detailed insights into user behaviour, which in turn enables continuous improvement of productivity and user experience.

DAPs can deliver pop-up alerts explaining specific company policies for certain websites or applications so that users are better informed on what they should or should not do. These messages can also be segmented for specific groups of people based on their department, location, role, and security clearance.
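To make the segmentation idea concrete, here is a minimal, purely illustrative sketch of how such guidance rules might be modelled in code. The class names, fields, and example policy below are hypothetical assumptions for illustration only and do not reflect any particular DAP product’s actual configuration or API.

    from dataclasses import dataclass

    @dataclass
    class Employee:
        department: str
        location: str
        role: str
        clearance: int  # e.g. 1 = lowest, 3 = highest

    @dataclass
    class GuidanceRule:
        app: str                   # application or website the rule applies to
        message: str               # pop-up text shown to the user
        departments: set           # departments this alert targets
        warn_below_clearance: int  # users below this clearance level see the alert

    def alerts_for(user: Employee, app: str, rules: list) -> list:
        # Return the pop-up messages a given user should see for a given application.
        return [
            r.message
            for r in rules
            if r.app == app
            and user.department in r.departments
            and user.clearance < r.warn_below_clearance
        ]

    # Hypothetical policy: finance and sales staff without high clearance are
    # warned before pasting sensitive data into a public generative AI tool.
    rules = [
        GuidanceRule(
            app="public-genai-chat",
            message="Do not paste customer or transaction data into public AI tools.",
            departments={"Finance", "Sales"},
            warn_below_clearance=3,
        )
    ]

    analyst = Employee(department="Finance", location="SG", role="Analyst", clearance=1)
    print(alerts_for(analyst, "public-genai-chat", rules))

In practice, the same kind of rule set could drive which pop-ups appear, which application features are hidden, and which groups of employees see each message.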

Beyond blocking certain applications altogether, DAPs can also hide particularly risky functionality. Instead of choosing between giving employees free rein of the wild world wide web and banning innovative new AI technologies outright, organisations can use DAPs to let employees safely harness the power of these tools with guardrails already in place.

Given the benefits generative AI applications offer in advancing employees’ capabilities and boosting productivity, creativity, and business output, it is unlikely that they will fade from the working landscape anytime soon.

Instead of banning their use altogether, organisations need to help their employees use these technologies safely with the right guardrails in place. With DAPs, shadow AI can be reined in and employees can get the full benefits of these tools without jeopardising their company’s wellbeing.
