Nate Walker

Primed and ready to start the work day, you sit in front of your laptop cradling your morning cup of caffeine. You check your inbox, like many of us do, and find that a customer reached out to you with a quick question. As you begin to type your reply to them, “Thanks! I’ll get back to you…” your email software auto-magically suggests how you should finish that sentence, “...as soon as I can”. 

On a similar note, I can personally attest to how many times built-in grammar checks have saved my writing and emails, as I’m sure we all can. These are but small examples of how bits of intelligence, embedded into the useful software we all know and love, surround us already. Whether at work or at home, on our laptops or on our phones, machine learning and artificial intelligence are woven into the fabric of our daily lives. As useful as these technologies are, we owe it to ourselves to learn how to use them responsibly.

Chatting GPT

Now, let’s talk about the Large Language Model (LLM) in the room - OpenAI’s GPT. To say that OpenAI’s ChatGPT product has simply caused a stir in the tech world since release may be akin to only saying that The Beatles were just a popular band in the mid-20th century. ChatGPT almost single-handedly introduced the term “generative AI” into the modern vernacular in less than six months. ChatGPT’s seismic impact on the tech scene has spawned a tidal wave of generative AI innovation and prompted dozens, if not hundreds, of companies to launch initiatives built upon OpenAI technology or to craft clones of their own. OpenAI’s product portfolio is the definition of disruption and the public efforts of tech giants are evidence of that fact. 

ChatGPT User Interface Screenshot - SafeBase
ChatGPT User Interface

No new technology springs up without its fair share of unique vulnerabilities and controversies. For these generative AI systems, the newly-discovered vulnerability of indirect prompt injection has pounced onto the scene, presenting a new, low-tech threat surface for malicious actors to exploit. This attack occurs when LLM-integrated applications, like Bing Chat for example, are manipulated behind the scenes within a webpage. The app may appear to be working as intended, but the user is actually on the receiving end of choreographed LLM behavior, perhaps with the intent to steal your data, as shown in this demonstration. Data privacy issues alone were enough for Italy to temporarily ban the product outright until recently. These new vulnerabilities and privacy worries, among many other valid concerns brought to light over the past few months, are why the White House is investigating AI policy measures and many other legislative bodies across the globe are considering the same.

Consider this: If organizations already struggle to keep their people, products, and proprietary knowledge secure without generative AI in the mix, how can they keep pace with this rate of change and new product adoption? We don’t claim to have the magic answer for you, but we do want to help spread the word of responsible use, what it means for generative AI, and what it doesn’t mean.

Responsible Use Is Not Avoidance

At SafeBase, we aim to be as proactive as possible when it comes to security concerns, and generative AI is treated no differently. 

As two of our core values at SafeBase, we take trust and transparency very seriously as a company. In an IT and systems context, this means that:

  1. We, as a security department, trust our colleagues to act responsibly and in accordance with IT/security policies, and
  2. Our cross-functional teammates know that they can be transparent with us as a security department should any concerns ever arise (e.g. a potential incident).

In addition to our usual security awareness briefings, we sought to have talks with team members about ChatGPT, generative AI, and what it means to responsibly use these new products. Rather than lock individuals out of tools, potentially hindering the discovery of ways to empower their teams’ productivity or be a barrier to product research, we feel that the answer is to share knowledge, engage in conversations, and discuss what responsible use entails with generative AI. Technical controls are always something that can be implemented, but not to the detriment of user edification or research and development.

The meteoric rise of a product like ChatGPT will, naturally, raise concerns among some organizations, especially within heavily regulated industries like finance, telecommunications, and/or competitive industries like the tech sector. This is why some well-known companies like JPMorgan Chase and Verizon have blocked use of ChatGPT outright, preventing all employees from using it or any associated products. Some mega companies, like Amazon, are reportedly not blocking access but warning their employees to not submit sensitive information. 

If companies are blocking ChatGPT outright, it may likely be due to how OpenAI initially had users opted-in, by default, to having their data contributed to the language model upon launch. In spring 2023, OpenAI changed the data policy for their API to say that data submitted is now not automatically going to be used to train their data models by default (please note that policies and options differ between their free product, ChatGPT,  and paid-for services like ChatGPT+, and their API). Additionally, OpenAI has developed a Trust Center (powered by SafeBase) to transparently share their security and privacy practices while addressing any concerns.

These companies do make a valid point when it comes to concerns over data exfiltration, like in the notorious case of Samsung employees submitting sensitive data and source code. However, cutting off access entirely does not allow room for training about responsible use, and, likewise, does not give space for organizations to explore potential use cases for these tools within their own ecosystem.

Perhaps there are use cases that do not work for your team and/or company. If your company already has strict IT controls in place (financial institutions, for example), you may be able to take a more granular approach to enabling or preventing access. In cases like this, you could explicitly block domains that are suspect or pending a security review. Admittedly, this is a difficult approach as the sheer amount of generative AI-driven sites, products, and browser extensions being released is difficult to keep track of and increasing every day. Allowlisting of generative AI applications, on the other hand, if allowed at all and communicated clearly across the organization, could be a more sustainable and approachable technical control. Through allowlisting, products can be allowed into your company’s portfolio as appropriate use cases are identified.

The approach that we advocate for here is a combination of both technical and human, monitoring and/or appropriate filtering at the infrastructure level and elevation of awareness at the people level.  

By tweaking your existing Responsible Use Policy to include references to generative AI use, you acknowledge the unique role and the likelihood of associates to seek out these tools. While, at the same time, cementing what is acceptable for you and your company’s needs.

Responsible Use Begins With Awareness

Many of you reading this will have been on the learning end of some form of awareness training, whether it be related to security, privacy, or corporate ethics. Why do awareness training programs exist?

There isn’t an awareness training program on the planet that would herald their ability to make a user impervious to phishing attacks, or how, by a user completing their training, that user will never, say, accidentally spill confidential information. That is not the point. The goal behind awareness training is to plant the seed of responsibility through user awareness, then cultivate that awareness level over time to ensure that responsible use remains top of mind.

Could company training programs use some refreshing and perhaps a bit of razzle-dazzle to be more exciting? Of course! On the heels of, potentially, dull and dry training, companies can take simple steps to sprinkle awareness and responsible use reminders throughout the year. This could be:

  • Through small, digestible chunks of content (e.g. infographics) with relevant info
  • Short, conversational presentations in company or departmental meetings
  • An open Slack and/or chat channel, where questions can be asked and content shared 

To help spread the word about responsible use of generative AI here at SafeBase, we developed a simple acronym to help our teammates consider safe use - that they use generative AI with CARE:

  • Conscientious: You’re aware of how to engage with the product safely and responsibly
  • Attributable: You’re concerned with the need to attribute generated information, especially when sources are not returned automatically
  • Reliable: You’re alert to the pervasive issue of fabrication (a.k.a. “hallucinations”) and you double-check the accuracy of what’s returned
  • Educated: You’re educated on prompt engineering and how best to use the product

Additionally, we are taking steps to make sure our users are aware of opt-out procedures and where to go to ensure that such steps are taken in full.

The idea is to be creative in your approach, depending on your business’s use case and what is most appropriate for your user base. 

The Case for Generative AI Policy

“Wait, even more policies? No, thanks.” And, I hear you! 

Your company likely already has many policies. They are usually acknowledged upon hire and then forgotten about, unless proper security awareness training provides refreshers of important policy points.

There is a case to be made about a powerful new technology, generative AI, that is likely here to stay. This new form of AI will be contributing to existing tech stacks and business applications in exciting ways both now and for years to come, so security departments should institute additions to existing policies or develop new policies altogether. Silver bullets do not exist as the policy decisions you make should be reflective of the needs of your organization.

Guidance does exist, thankfully! Caroline McCaffery, lawyer and co-founder of startup ClearOps, makes the point that companies must differentiate between consumer-facing products, which usually offer limited security and data privacy safeguards, and enterprise-level offerings, which have security and privacy protections baked into contractual agreements. McCaffery highlights the potential need for multiple policies, or at least the need for security and/or privacy professionals to help their fellow associates parse the differences in terms and conditions. 

Questions to consider are wholly your own, but some examples to get you started are:

  • Do you forbid use of free tools only, yet permit tools you can purchase as a company after a security and privacy review?
  • Do you restrict use of generative AI products to only certain creative departments who have a specific need, while limiting access within other departments? Example: Marketing (creative work) vs. Engineering (may submit source code)
  • Or, do you restrict all access to all generative AI, like the companies mentioned above?

All of these possible choices are valid, depending upon your needs and your company’s level of risk tolerance.

Needs here vary widely and no one policy, or addition of several targeted policies, will equal perfect security from generative AI misuse. A combination of awareness training, discussions, and policy changes are one way of proactively addressing concerns your organization may have. Remember that no one policy is set in stone, especially not one addressing something as fast moving as generative AI. Leadership, security, privacy, and legal voices should be part of the conversation.

Get the Ball Rolling with Helpful Resources

Part of your approach to helping users interact with this exciting new solar system of tools is to provide them with methods of proper use. This will take some effort to collate resources, but, whether you share links to materials or conduct live training, the desired impact is the same: safe, secure, responsible use.

Here are a few articles to get you going on your journey:

What are AI hallucinations and how do you prevent them? (Zapier)

Zapier outlines how to target your prompts to avoid inaccuracies in returned information.

What is responsible AI and how can it help harness generative AI? (PwC)

This article by PwC highlights several points about responsible use, including an entire section at the bottom, titled, “How to get started using generative AI responsibly”.

Task Force on Responsible Use of Generative AI for Law (MIT)

Though not completed and published as of this writing, MIT will be making available a formal study on the topic. Individuals may sign up to be notified.

How enterprises can navigate ethics and responsibility of generative AI

Here, the author talks about the inherent risks present once companies adopt generative AI tools. Helpful considerations are shared.

SafeBase is the scalable Trust Center that automates the security review process between buyers and sellers. With a SafeBase Trust Center, companies can seamlessly share sensitive security documentation with buyers and customers, including streamlining the NDA signing process by integrating with your CRM and your data warehouse. 

If you’re ready to take back the time your team spends on security questionnaires, create a better buying experience, and position security as the revenue-driver it is, get in touch with us.