
Generative AI

What is AI? What is Generative AI?

 

There are different ways of defining AI, but in simple terms it’s about the automation of tasks that might once have required a level of human intelligence to complete. Modern AI systems are built using a technique called machine learning, which involves using a lot of computing power to find statistical patterns in vast amounts of data. These systems can make inferences from these data inputs in order to produce outputs such as “predictions, content, recommendations or decisions” (OECD).

There are many types of AI but the focus of this page is generative AI.

Generative AI is a specific kind of AI that generates novel content, typically using an architecture called the transformer. That content might include text, images, code, audio or audio-visual material. GPT stands for ‘generative pre-trained transformer’, reflecting the fact that these models are pre-trained on vast amounts of data. They can also be ‘fine-tuned’ with additional data, or mapped to a predefined knowledge base, to customize or control the underlying model for a more specific application or domain (e.g. medical use, legal tech, etc.).
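To make the ‘knowledge base’ idea concrete: one common approach, often called retrieval-augmented generation, stores reference passages, finds the ones most relevant to a user’s question, and includes them in the prompt so the model answers from that material rather than from its general training data. The sketch below is a minimal, hypothetical illustration assuming the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model names are illustrative and the tiny in-memory ‘knowledge base’ is a stand-in, not a production design:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A tiny stand-in knowledge base; in practice these would be your
    # organisation's documents, split into passages.
    passages = [
        "The gallery is open Tuesday to Sunday, 10am to 5pm.",
        "Grant applications for the spring round close on March 1.",
    ]

    def embed(texts):
        # Convert text into vectors so similarity can be measured numerically.
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return [item.embedding for item in resp.data]

    def most_relevant(question, passages):
        # Pick the passage whose embedding is most similar to the question's.
        vectors = embed([question] + passages)
        q, docs = vectors[0], vectors[1:]

        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        return max(zip(docs, passages), key=lambda pair: dot(q, pair[0]))[1]

    question = "When do grant applications close?"
    context = most_relevant(question, passages)

    # The retrieved passage constrains ('grounds') the model's answer.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Answer using only this context: " + context},
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)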

Video: What is Artificial Intelligence?

Common AI terms

 

Where can I access Generative AI?

 

Popular generative AI systems include OpenAI’s ChatGPT and DALL-E, Davinci, Stable Diffusion and Google Gemini. Many of these systems have a free version in addition to paid options. Some of these systems are ‘multi-modal’ - able to generate more than one kind of content from a text prompt - while others can produce only one kind of content, such as images.

Generative AI is also being directly incorporated into a lot of popular software, including search engines like Google and Bing, enterprise tools like the Microsoft Office suite and Google Workspace, and popular website platforms like Squarespace and Wix. It’s also increasingly part of other creative tools such as Adobe Creative Cloud, image template tools like Canva, newsletter tools like MailChimp, a range of video and audio editing software, and social media platforms. It’s likely that most software will include some level of generative AI capability in the not-too-distant future.


How do I use Generative AI?

 

Using generative AI is as simple as typing in a prompt - a set of instructions, written in plain language, for the AI system to follow. There are courses you can take that provide tips on how to get better results from a prompt, a practice often referred to as prompt engineering. In general, the more context you provide and the clearer you make the instructions, the more precise the system’s response will be. This works well for text; getting something precise from an image generator is much harder, although it can readily provide general images that relate to your prompt.
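To make this concrete, the same plain-language prompting works whether you type into a chat window or call a model from code. Below is a minimal sketch, assuming the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name is illustrative, not a recommendation:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A prompt that specifies audience, length, tone and the key facts.
    prompt = (
        "You are writing for a community arts newsletter. "
        "In about 100 words, in a friendly tone, invite members to our "
        "annual general meeting on June 12 at 7pm."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

Notice how the prompt spells out audience, length, tone and key facts; a vaguer prompt like “write a meeting invite” would leave the system to guess all of those details.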

 

How could I apply Generative AI in my work?

 

People are using generative AI to:

  • Provide a skill you don’t have. For example, if you need an illustration but you can’t draw, you can use a text-based prompt to “draw an elephant standing on a surfboard” and the system will generate a range of options that approximate your instructions. You can prompt generative AI systems to create text, code, visuals, audio, video or audio-visual content.
  • Augment or enhance an existing skill. You might prompt a generative AI system to ‘write a 500 word article on managing an event’, or you could use the system to edit or even rewrite your text. Tools like Grammarly use generative AI to provide writing assistance.
  • Automate a task so you don’t have to do it, or to get it done faster. These systems can generate social media content that can be scheduled so that all of your posts are automated. They can transcribe meeting notes, summarise long documents, and be used to write reports, grant proposals and other administrative documents (see the sketch after this list).
  • Broaden perspectives and provide ideas. True to its name, generative AI is good at generating a lot of ideas. You can ask the system questions and get ideas that help move your project forward. Maintaining your own creative practice remains crucial, however: a simple rule to follow is that more thought should go into the final product than is taken from the tool. Without creativity and critical thinking, dependency on AI can erode autonomy and cognitive problem solving; using AI responsibly and for good should strengthen, rather than replace, human skills, abilities and creative function.
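As a concrete example of the automation point above, here is a minimal sketch of summarising a long document, assuming the OpenAI Python SDK, an API key in the OPENAI_API_KEY environment variable, and a hypothetical file of meeting notes; the model name is illustrative:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Load the document to summarise ("meeting_notes.txt" is a hypothetical file).
    with open("meeting_notes.txt") as f:
        notes = f.read()

    # Ask the model to condense the notes into action items.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Summarise these meeting notes as five bullet-point action items."},
            {"role": "user", "content": notes},
        ],
    )
    print(response.choices[0].message.content)

A script like this could be scheduled to run automatically, which is what turns a one-off prompt into an automated task.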

Research has shown that generative AI tends to add the most value for users who are below average at the task they want it to perform; people who are ‘experts’ at a particular task might actually be hindered by applying generative AI. This offers some guidance for thinking about which tasks would most benefit from the technology and which tasks you may wish to tackle on your own. When using generative AI, it can help to think in the same terms you would when using Wikipedia: treating Wikipedia as a starting place is, by now and for the most part, common to most of our digital literacies. For non-experts it can provide an excellent launching point - so long as the researcher understands that it is not always correct or thorough on any given matter, especially when it comes to specifics - whereas experts might only hinder their progress by starting there.

 

 

What are the ethical and legal issues with Generative AI?

There are many ethical and legal issues surrounding generative AI. One way to think about these issues is to group them into three categories - how the technology is made, who controls it and how it’s used.

How it’s made: Data theft, exploitation of labour, bias and environmental concerns

  • Generative AI is constructed with data that has been scraped from the internet. Much of this material is copyrighted and represents the work of artists. There are several lawsuits underway that contest the legality of how this data has been acquired and used. Beyond the legal questions, the practice of taking data and using it without permission is ethically reminiscent of historical, extractive colonial practices: it replicates practices that have been applied to other resources, such as land.
  • The vast data sets used to train generative AI contain bias. This shows up in a number of ways in the user experience, discussed in more detail under ‘How it’s used’ below.
  • There is exploitation in the labour supply chain for generative AI. The data used to train generative AI must be pre-processed - and sometimes labelled - before it is ready for machine learning models. Much of this work is off-shored and conducted by poorly paid gig workers, who in some cases are subjected to traumatic imagery.
  • Generative AI incurs a massive carbon footprint. Research illustrates how training and running these large models consumes enormous amounts of energy and drives demand for more data centres. There needs to be more social and political discourse about the vast amount of power and resources AI needs in order to function: if governments support climate resilience on one hand while supporting AI’s power use on the other, then we have further issues of ignorance, power politics, double standards and, of course, large-scale environmental harm to consider.
  • Data sets that contain what we would consider bias and discrimination far outnumber those that don’t, and antiquated data outnumbers newer data. Despite attempts to train AI on cleaner, unbiased data, the overall volume is in many cases still influenced by antiquated viewpoints. It is important to consider how dated the bulk of the data is compared with more recent and equitable data sets, which remain scarce and incomplete. Data can also contribute to bias not through what it contains, but through what it omits.

Who controls it: Generative AI consolidates power

  • Only a handful of companies in the world have the means to develop the core (‘frontier’) models that underpin a lot of generative AI. Organizations that develop these models are typically aligned with a cloud provider: OpenAI and Mistral have partnerships with Microsoft, Google and Meta host their own models, and Anthropic has partnered with Amazon Web Services. The sheer amount of compute required to run these models demands billions of dollars in cloud infrastructure, so other players typically build on an underlying model rather than creating new models from scratch. This means that power is consolidated in the hands of a few big corporations.
  • There is another layer of consolidation in the hardware used to produce artificial intelligence models. Nvidia dominates the market for the specialised chips that provide the processing power necessary to train and run these large models, and there are many geopolitical implications surrounding the supply of computer chips for AI systems.

It’s worth noting that individual users will not solve the ethical issues related to how generative AI is designed or who owns it; there are opportunities for collective action and for political reforms that address these issues through regulation. However, individual users will need to grapple with these issues as part of their ethical deliberation in determining how, or if, they wish to use these technologies.

 

How it’s used: Generative AI can introduce risks for users

  • Generative AI can ‘hallucinate’, making up information that is inaccurate or untruthful. This may not be an issue for a creative work, but it can be harmful if the content requires veracity. Some generative AI vendors, in fields such as legal tech, are addressing this problem with specially developed tools that ‘fine-tune’ or place guardrails on a model; these are not the same as free, general-purpose versions of generative AI like ChatGPT.
  • There can be copyright issues related to the information you use to prompt generative AI systems like ChatGPT. Generally speaking, do not put any information into these systems that you do not rightfully own or have specific permission to use, and do not assume that fair use provisions for artistic or educational purposes apply to information used in a commercial generative AI system. In addition, don’t share personal, sensitive or confidential information: be aware of privacy concerns, and treat the information you put into these public-facing generative AI systems as if it were being published on the internet. If you have a tool with explicit privacy protections built in, such as an enterprise version, there may be more latitude with respect to the information you can use in a prompt.
  • There is a dangerously large gap between AI literacy and AI’s potential for harm. Using generative AI is not the same as using Google search, and yet many people, including professionals, remain largely unaware of the factors at hand. “AI awareness” campaigns are needed to close this gap, from the municipal and public sector levels to the global level: AI tools are advancing daily, while AI awareness has barely been addressed.
  • Automation bias is a phenomenon whereby we over-rely on machines, even when there is information to suggest the machine might be wrong. It’s important to remain vigilant, to question the outputs of the system, and to be aware that AI systems are only one form of knowledge generation. Automation bias also encourages the loss of human autonomy and critical thinking; scaled to a social and political level, it puts democratic autonomy at risk.
  • Generative AI can produce content that is biased. This shows up in imagery that reproduces and reinforces stereotypes, text that reflects gender or racial bias, and various forms of misrepresentation. These biases are part of the DNA of generative AI, which has been trained on datasets that are skewed, non-representative and reflective of historical bias. Even attempts to address bias, as Google famously did with its Gemini model, have produced new kinds of issues, as well as questions about the degree to which organizations should apply filters to user prompts. Users should be aware of these issues when using these tools.

 

Resources

 

ChatGPT: Is it for me?

The Backwardness of Generative AI: Making Exploitation Cool Again

AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks

Government of Canada Guide on the use of Generative AI

AI and the Arts: 5 Steps to a Responsible Generative AI Policy

Rozsa Foundation: AI Commitments and Guidelines

Nvidia Faces Copyright Infringement Lawsuit Over AI Training Data

Humans Absorb Bias from AI—And Keep It after They Stop Using the Algorithm

A poster’s guide to who’s selling your data to train AI

Fairly Trained - certified AI models that compensate rights-holders for training data

Comparative Philosophies in Intercultural Information Ethics

Historical bias in AI systems
