How is AI reshaping the way we live, create, connect, and evolve?
On June 13, Shared Futures: The AI Forum will bring together the cultural architects of our time to explore.
In recent months, the growing use of Generative Artificial Intelligence (GenAI) technology – including general-purpose and publicly available foundational models like GPT-4, LLaMA, and DALL-E – has captured the public’s attention and dominated news headlines. It has also generated widespread concern about the potential consequences of rapid, unrestricted use of GenAI-based tools. While AI has been around for years, the public availability of large, general-purpose foundational models, combined with a significantly lower barrier to accessing their power, is new. This has introduced risks and questions that public and private sector leaders are only beginning to consider.
To help bring clarity to this rapidly unfolding conversation, the Aspen US Cybersecurity Group convened experts to draft high-level guidance on how companies can inform employee use of openly available GenAI technology. What follows is a template guidance document that the Group developed for a broad range of organizations. Organizations can revise it to fit their specific needs and share it with employees and business units that use, rely on, or are considering how employees can or should use openly available GenAI-based solutions (as distinct from company-approved GenAI enterprise products). The guidance is targeted at general employee populations and is designed to serve as a baseline document for companies to adapt. No organization should adopt it without first reconciling it against its own policies, procedures, and legal and regulatory obligations.
Importantly, the guidance provided below is relevant as of this report’s publication in September 2023. The GenAI playing field is changing quickly, and before using this document, organizations should assess whether it is still relevant to their specific organizational needs and to the current state of technology, law, and regulation. We encourage organizations to use any portion of this guidance that is relevant to their needs and to modify it as they see fit. No attribution is necessary.
Generative AI (GenAI) is the latest development in the field of AI: an advanced form of machine learning that can produce new content using models, such as large language models (LLMs), trained on large amounts of data, including audio, text, images, code, simulations, and video. Like all other forms of AI, GenAI has the potential to transform our business, our industry, and our competition.
We encourage all Staff to explore and innovate with GenAI, but to do so responsibly.[1] This means understanding GenAI’s risks and limitations as well as abiding by the general guiding principles outlined below. We require that you use GenAI-based tools consistent with this Guidance. If you are in doubt, please consult with your supervisors and leaders. As we continue to define further use cases for our businesses and operations, we will provide additional guidance. This guidance is current as of September 20, 2023.
[1] This guidance addresses how you can use openly available instances of GenAI, such as ChatGPT, Bard, or Bing AI, and not company-developed or company-approved enterprise products. For those tools, please follow the guidance specific to them.
Before beginning to use GenAI, it is important to understand its limitations and associated risks. Our ability to control and monitor the following aspects of model implementation is key to our ability to exercise responsible AI practices.
Foundational GenAI models are built on a very large pool of training data, but these sources are still finite. As a result, that data, its uses, and outputs may:
The data issues that can occur with openly available models generally do not exist with our internal GenAI models, which are typically built for a specific purpose and which we can manage. Therefore, while this technology brings great potential, when using openly available models you must consider the risks to our business and reputation in every engagement.
You are responsible and accountable for your use of GenAI. To help guide your use of GenAI-based tools, we provide below basic principles that should inform your thinking as you prepare to use this technology. We also provide several “dos and don’ts” that we believe to be leading practices. However, it is important to remember that the GenAI landscape is still rapidly developing, and the capabilities, norms, and rules around use of the technology are not yet solidified. As a result, whether and how these principles and best practices apply to your specific circumstances may change with time. If in doubt, ask.
As you begin to use GenAI you will need to:
[2] You can consider models such as the National Institute of Standards and Technology’s AI Risk Management Framework, https://www.nist.gov/itl/ai-risk-management-framework, or the International Organization for Standardization’s guidance on managing AI risks, ISO23894, https://www.iso.org/standard/77304.html.
[3] Aspen Digital’s emerging technologies team has developed a new primer on this topic, Intro to Generative AI, https://www.aspendigital.org/report/intro-to-generative-ai/.
The following are examples of acceptable and unacceptable uses of publicly available GenAI technologies. Note that these dos and don’ts rest on the following baseline assumptions: (a) you provide only non-personal, non-confidential, non-proprietary, and/or public information to the foundational models; (b) no output is used for a commercial purpose; and (c) any generated code or model is not put into production unless specific guidance and approvals allow you to do so.
[4] AI governance is a set of principles, policies, and practices that govern the development, use, and ethical implications of artificial intelligence (AI).
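To make assumption (a) above concrete, a simple sketch is shown below of how a team might screen a prompt for obviously sensitive material before pasting it into a publicly available GenAI tool. The patterns and function names here are hypothetical illustrations, not part of this guidance; a real organization would rely on its approved data-loss-prevention tooling and its own rules for what counts as confidential.

```python
import re

# Hypothetical patterns an organization might screen for before sending text
# to a publicly available GenAI tool. Real deployments would use approved
# data-loss-prevention tooling and organization-specific rules instead.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal marker": re.compile(
        r"\b(confidential|proprietary|internal only)\b", re.IGNORECASE
    ),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt.

    An empty list means no known pattern matched; it is NOT a guarantee
    that the prompt is safe to share outside the organization.
    """
    return [
        name
        for name, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(prompt)
    ]

# Example: this prompt trips two checks and should not be submitted as-is.
findings = screen_prompt(
    "Summarize this CONFIDENTIAL memo for jane.doe@example.com"
)
```

A pattern-based screen like this only catches the most obvious leaks; judgment about whether information is proprietary still rests with the employee, which is why the baseline assumptions above are stated as responsibilities rather than automated controls.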
When exploring GenAI with commercially available tools, don’t:
The Aspen US Cybersecurity Group is the leading cross-sector, public-private forum for promoting a secure future for America’s institutions, infrastructure, and individuals – in cyberspace and beyond.