The OpenAI integration allows you to generate text and images based on your own prompts, using the artificial intelligence of OpenAI's language models. These models have the ability to comprehend and produce text and images for you. You can also analyze and manipulate existing text and images, depending on which features you leverage in Glide.
Don't see the OpenAI integration?
You may need to upgrade your plan. Browse Glide's plans and find the right fit for you.
A prompt, in the context of OpenAI, is an instruction you give the model to generate text or images for you. Here is a quickstart guide on prompts.
Need to set up your integration? Get started here.
There are many possibilities with the OpenAI integration. You can:
This guide will break down each of these features and how you might leverage them with your Glide app.
OpenAI features can be set up in two ways: in the Data Editor or in the Actions Editor. All OpenAI features can be set up directly as an action. Two OpenAI features—Complete chat and Speech to Text—can also be set up in the Data Editor.
In the Actions Editor, the feature applies to the active row where the action is performed. The table associated with the action contains input and output columns.
There are several parameters you can add to your OpenAI commands to fine-tune how the AI will respond. When working with artificial intelligence, it’s helpful to first understand which language model you’re interacting with so you can make decisions about how best to guide that model and make it the most impactful for your work. OpenAI has created several different models, some of which you may have heard of (like GPT-4 and DALL·E) and others which may be less familiar (like Whisper and Embeddings).
You can review the latest OpenAI models here.
Glide supports three different parameters, which we call Model Tweaks. All of these allow you to fine-tune the responses you generate with whichever model you’re using.
Temperature is represented by a number between 0.0 and 2.0. OpenAI models are non-deterministic, meaning that identical inputs can yield different outputs. Higher temperature will make the output more random, diverse, and creative—but also possibly less relevant to the input prompt. Lower temperature will make the output more focused and deterministic, with only a small amount of variability remaining, so repeating the same input will give you very similar responses every time.
Maximum length, which must be a number below 2048, controls the maximum length of the generated text, measured in the number of tokens (words or symbols). A higher value will result in longer responses, but may also make the responses less coherent. Most models have a context length of 2048 tokens, except for the newest models, which support 4096 tokens.
Frequency penalty is represented by a number between -2.0 and 2.0. Frequency penalty controls the model's tendency to repeat itself or produce responses that are irrelevant to the input. It works by lowering the chances of a word being selected again the more times that word has already been used. A higher frequency penalty will discourage the model from repeating itself.
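Behind the scenes, these Model Tweaks correspond to parameters on OpenAI's completion APIs. The sketch below shows how such a request body might be assembled; the parameter names follow OpenAI's public chat completions API, but how Glide constructs its requests internally is an assumption.

```python
# Sketch of how Glide's Model Tweaks map onto OpenAI chat completion
# parameters. Field names follow OpenAI's public API; the helper itself
# is illustrative, not Glide's actual code.

def build_completion_request(prompt, model="gpt-4",
                             temperature=1.0,         # 0.0-2.0: higher = more random
                             max_tokens=16,           # hard cutoff on generated tokens
                             frequency_penalty=0.0):  # -2.0-2.0: discourages repetition
    """Assemble the JSON body for a POST to /v1/chat/completions."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    if not -2.0 <= frequency_penalty <= 2.0:
        raise ValueError("frequency penalty must be between -2.0 and 2.0")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "frequency_penalty": frequency_penalty,
    }

# A factual Q&A request: temperature 0 keeps answers focused and repeatable.
request = build_completion_request("What is the capital of France?",
                                   temperature=0.0, max_tokens=32)
```

Notice that a temperature of 0 suits data extraction and truthful Q&A, while higher values suit creative generation.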
OpenAI’s GPT language models generate answers to the best of the model’s capabilities. The accuracy of these answers is not guaranteed.
With the Answer question about a table feature, users can ask a question about a table of data.
This feature is experimental and may provide approximate or erroneous answers.
Question (required): A question in text format.
Source table (required): The table the feature will analyze.
Additional context (optional): Give a secondary instruction to complete the question.
Temperature (optional number, defaults to 1): For most factual use cases, such as data extraction and truthful Q&A, a temperature of 0 is best.
Maximum length (optional number, defaults to 16): This is a hard cutoff limit for token generation.
Frequency penalty (optional number, defaults to 0): Number between -2.0 and 2.0.
An answer to the question.
In the Data Editor, determine which table will be the data source for the action.
Create columns that will hold basic text values for the question and answer.
Optionally, create additional columns to hold basic text values for Query and Prompt, and basic number values for Temperature, Maximum Length, and Frequency Penalty.
In the Actions Editor, configure an Answer question about a table action.
Configure the action by pointing the fields to their associated columns.
Rules of Thumb:
Use the latest OpenAI model.
Be specific, descriptive, and as detailed as possible about the desired context, outcome, length, format, style, etc.
Model and temperature are the most commonly used parameters to alter the output.
The Generate image feature allows you to create images from scratch based on a text prompt. Note that images take 5-10 seconds to generate on average.
A description of the image to be generated.
In the Data Editor, create a basic text column to store the image prompt and a basic image column to house the generated image.
In the Actions Editor, create a new action, select the Generate image action, and select the table where the generated image will be stored.
Select the prompt column you set up previously, or enter a manual prompt.
Glide will input the latest DALL·E model automatically, and you can change it if needed.
Point the DALL·E image field to the basic image column that will store the generated image.
Change the default size if desired.
Input a style if desired.
Select if you'd like the image to be HD.
The action, when run, will generate an image from the description or custom text.
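The size, style, and HD options above correspond to parameters on OpenAI's image generation API. A minimal sketch of how such a request body could look (field names follow OpenAI's public `/v1/images/generations` endpoint; Glide's internal request is an assumption):

```python
# Sketch of the Generate image options expressed as an OpenAI image
# request body. The "HD" checkbox maps to the API's quality parameter.

def build_image_request(prompt, model="dall-e-3",
                        size="1024x1024",  # default size; changeable in the action
                        style=None,        # e.g. "vivid" or "natural"
                        hd=False):
    body = {
        "model": model,
        "prompt": prompt,
        "size": size,
        "quality": "hd" if hd else "standard",
    }
    if style is not None:
        body["style"] = style
    return body

request = build_image_request("A watercolor fox in a forest",
                              hd=True, style="natural")
```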
You can embed the power of ChatGPT in your app to create your very own question and answer features. This feature can be configured either with or without the ability to reference the chat history when forming responses. If you’d like the feature to reference chat history, first follow this guide, then proceed to the next guide, Complete chat (with history).
A simple text prompt, question, or request.
A response from ChatGPT.
There are several ways to set up the Complete Chat feature in Glide. In this guide, we will set it up using the Comments component, as it provides the best chat experience for users.
First, create a chat message table in the Data Editor with the following columns:
If you plan to use Complete chat with history, make sure your column names match these exactly.
Timestamp: The time each message and answer are created.
Session ID: A unique value for each chat conversation (e.g. user ID, conversation ID, etc.)
Content: A text column to store the message sent to ChatGPT.
Result: A text column to store the response from ChatGPT.
User Name: A text column to store the name of the user who created the message.
User Photo: An image column to store the image of the user who created the message.
User Name and User Photo are only required if you are using the Comments component.
Next, we’ll set up the chat feature in the Layout Editor:
Create a new screen with the Comments component.
Connect it to Data:
Configure its Content using the fields you created in the last step:
Comment: The field that stores the user’s message (i.e., Content)
Timestamp: The field that stores the time the message was created.
User photo: The field that stores the photo of the user that created the message.
User name: The field that stores the name of the user that created the message.
Topic: The field that stores the unique identifier for the conversation (i.e., Session ID)
Finally we’ll create a custom action to send the user’s prompt to ChatGPT and store the response as a new message in Glide:
Within the Comments component's settings, create a new action for the AFTER SUBMIT ACTION.
Add a new Complete chat action specifying the user’s Message and where the Result from ChatGPT should be stored.
If you'd like your output in JSON format, check the box.
Follow that with an Add row action so that the response appears as a new message to the user.
Sheet: Your chat message table.
Timestamp: The current date/time.
Session ID: The unique identifier for the chat.
Content: The Result from the last message.
User Name: A custom name for your bot.
User Photo: A custom photo for your bot.
Finally, we added two Show notification actions to let our users know when the message was sent and when a new message was received.
If you'd like to allow users to send an image to ChatGPT, you can also add an Image input column. This leverages GPT-4's Vision technology. Read more here. This can only be used if you select a model that supports Vision.
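At the API level, a single Complete chat turn without history sends only the user's latest message. A sketch of what such a request body might look like (names follow OpenAI's chat completions API; the `response_format` mapping for the JSON checkbox is an assumption):

```python
# Sketch of one Complete chat turn, without history: only the latest
# user message is sent. Illustrative only, not Glide's actual code.

def build_chat_request(user_message, model="gpt-4", json_output=False):
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    if json_output:
        # The JSON-format checkbox corresponds to OpenAI's response_format option.
        body["response_format"] = {"type": "json_object"}
    return body

request = build_chat_request("Suggest three app name ideas", json_output=True)
```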
You can embed the power of ChatGPT in your app to create your very own chatbot. This feature can be configured either with or without the ability to reference the chat history when forming responses.
This guide will walk you through the additional configuration needed so that ChatGPT can reference your chat history when forming responses.
Message History: The table where all chat messages are stored.
Message: A simple text prompt, question, or request.
Session ID: The unique identifier for the conversation.
Result: The response from ChatGPT.
If you have not gone through the first guide, please reference the Complete chat guide before continuing. This guide continues what was set up there.
First, update your chat message table in the Data Editor:
Add a basic text column called “Role” to store the role of the user that created the message.
Messages from your users will have a role of “user.”
Messages from ChatGPT will have a role of “assistant.”
ChatGPT requires specific fields in order to reference the chat history. Make sure you have the following fields created and labelled exactly as follows:
Finally, update the action that is triggered when a new message is created:
Make sure your Message History has these fields: Timestamp, Content, Session ID, and Role.
Update the Add row action to set the Role of ChatGPT’s message to “assistant”
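This is why the Role column matters: each stored row maps directly onto a message object in the history array the API expects. A sketch of that mapping, using a hypothetical helper and sample rows (illustrative only, not Glide's code):

```python
# Sketch of how stored chat rows become an OpenAI messages array.
# Each row's Role ("user" or "assistant") and Content map onto the
# message objects the chat completions API expects.

def build_history_messages(rows, session_id):
    """Filter rows to one conversation, oldest first, and format for the API."""
    session = [r for r in rows if r["session_id"] == session_id]
    session.sort(key=lambda r: r["timestamp"])
    return [{"role": r["role"], "content": r["content"]} for r in session]

rows = [
    {"timestamp": 1, "session_id": "s1", "role": "user", "content": "Hi"},
    {"timestamp": 2, "session_id": "s1", "role": "assistant", "content": "Hello!"},
    {"timestamp": 3, "session_id": "s2", "role": "user", "content": "Other chat"},
]
messages = build_history_messages(rows, "s1")  # only the "s1" conversation
```

The Session ID filter is what keeps conversations separate, so each chat only sees its own history.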
With the Speech to text feature, you can transcribe an audio recording into text. This allows you to leverage AI to generate usable text data from audio your users record and submit.
Input: An audio recording.
Output: A text transcription of the audio recording.
In the Data Editor, create a basic text column to store the URL of the audio.
The action, when run, will transcribe the audio file and output the corresponding text.
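The Speech to text feature corresponds to OpenAI's Whisper transcription endpoint, which accepts an uploaded audio file. A small sketch that checks whether a recording URL uses a format Whisper is documented to accept (illustrative only; Glide performs its own handling internally):

```python
# File formats OpenAI documents as accepted by its transcription endpoint.
SUPPORTED_FORMATS = {"flac", "mp3", "mp4", "mpeg", "mpga", "m4a", "ogg", "wav", "webm"}

def is_supported_audio(url):
    """Return True if the URL's file extension is a Whisper-supported format."""
    extension = url.rsplit(".", 1)[-1].lower()
    return extension in SUPPORTED_FORMATS

print(is_supported_audio("https://example.com/recording.m4a"))  # True
print(is_supported_audio("https://example.com/notes.txt"))      # False
```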
The Text to Speech feature can be used as an action in the Layout or Action Editor. With it, you can convert written text to an audio file.
Input: The written text to be converted.
Output: A URL for an audio file.
In the Data Editor, create a column to store the input text and a column to store the URL result.
In the Layout or Action Editor, create a new action and select Text to Speech.
Use the column with the input text as your input.
Open the Options menu if you'd like to configure additional options.
Which audio model to use. You can learn more about OpenAI's audio models here.
Which voice to use. You can learn more about voice options here.
Whether to increase or decrease the speed of the audio.
The file format of the response. Learn more about format options here.
Use the column you created to store the Audio URL for the result.
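The options above correspond to parameters on OpenAI's text-to-speech endpoint. A sketch of how such a request body might be assembled (field names follow OpenAI's public `/v1/audio/speech` API; Glide's internal request is an assumption):

```python
# Sketch of the Text to Speech options as an OpenAI request body.
# Illustrative only, not Glide's actual code.

def build_speech_request(text, model="tts-1", voice="alloy",
                         speed=1.0, response_format="mp3"):
    if not 0.25 <= speed <= 4.0:  # range allowed by OpenAI's API
        raise ValueError("speed must be between 0.25 and 4.0")
    return {
        "model": model,
        "input": text,
        "voice": voice,
        "speed": speed,
        "response_format": response_format,
    }

request = build_speech_request("Welcome to the app!", voice="nova", speed=1.25)
```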
The following features were deprecated in December 2023. Apps with existing computed columns and actions using these features will continue to work. However, if you delete an action or computed column that has been configured, you will not be able to restore it.
With the Analyze sentiment feature, you can identify whether a piece of text is positive, negative, or neutral.
Input (required text)
Text such as a word, sentence, or paragraph. This text should be fewer than ~3,000 words.
The terms “positive,” “neutral,” or “negative”.
The Analyze sentiment feature can be used as an action or as a computed column. To set up as a computed column:
Create a basic text column for the input text.
Create a new computed column and select Analyze sentiment as the type. You can search for this in the “Type” menu.
Set the Prompt field to point to the basic text column whose text is to be analyzed.
OpenAI’s Analyze sentiment model is used to identify whether a piece of text is positive, negative, or neutral. Some use cases for Analyze Sentiment might include:
Monitoring user comments to assess the likeability of your brand or products.
Improving customer support by identifying negative and neutral opinions.
Tracking the mood of employees by analyzing team member surveys and segmenting responses.
Analyzing user-generated content and ensuring tone consistency.
Turning sentiment analysis results into numerical values and performing roll-ups such as counts and averages for reporting.
Acting promptly on negative comments submitted in a feedback form.
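The roll-up idea above can be sketched in a few lines: map each sentiment label to a number, then compute a count and an average score for reporting. The -1/0/+1 scale is an illustrative choice, not something Glide prescribes.

```python
# Map sentiment labels to scores, then aggregate for reporting.
SCORES = {"positive": 1, "neutral": 0, "negative": -1}

def sentiment_rollup(labels):
    scores = [SCORES[label] for label in labels]
    return {"count": len(scores), "average": sum(scores) / len(scores)}

rollup = sentiment_rollup(["positive", "positive", "neutral", "negative"])
# rollup == {"count": 4, "average": 0.25}
```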
With the Answer question feature, you can create a question-and-answer or chatbot feature within your app. Note that OpenAI’s GPT language models generate answers to the best of the model’s capabilities. The accuracy of these answers is not guaranteed.
Input (required text)
An answer to the question
Create a basic text column that will house the question to be answered.
Create a new computed column and select the Answer question column, which you will find in the Integrations group or by using the search function.
Set the Question field to point to the basic text column whose question will be answered.
The Complete prompt feature has limitless potential. You can ask for anything, from story and recipe ideas, to business plans, to character descriptions and marketing slogans. By providing a text prompt as a cue, the Complete prompt computed column will generate a text output that tries to replicate the context or pattern that was initially given. Depending on your prompt, the text output might continue the initial text prompt, transform it, or generate an entirely new text related to it.
If repeated, you might get a slightly different output even if your prompt input stays the same. This is because OpenAI’s language models are non-deterministic. Setting the temperature to 0 will make the outputs mostly deterministic, but a small amount of variability may remain.
Prompt (required text): The word, sentence, or paragraph to be completed.
Model (required text): The OpenAI API is powered by a diverse set of models with different capabilities. For instance:
text-davinci-003 can be used for text completion.
gpt-3.5-turbo is optimized for chat at 1/10th the cost of text-davinci-003.
Refer to OpenAI’s latest language models.
Temperature (optional number, defaults to 1): Number between 0 and 2.
Maximum length (optional number, defaults to 16): Not a target length for the output, but a hard cutoff limit for token generation.
Frequency penalty (optional number, defaults to 0): Number between -2.0 and 2.0. Positive values decrease the likelihood of the same strings of words being repeated verbatim.
A text output that tries to replicate the context or pattern that was initially given, such as stories, recipes, business plans, character descriptions, or marketing slogans.
Create a basic text column that will house the Prompt.
Create a new computed column and select the Complete Prompt column, which you will find in the Integrations group or by using the search function. The values for the Model, Temperature, Maximum length, and Frequency penalty can be set within the configuration of the Complete prompt column, or you can create basic columns for each should you require further fine-tuning of the output.
Set the prompt to point to the basic text column whose Prompt will be completed. Optionally, set the model, temperature, maximum length and frequency penalty.
The Correct grammar feature corrects the grammar of a block of text. This feature will only change text that is grammatically inaccurate. It will not edit for tone or other stylistic choices.
A block of text (sentence, paragraphs) with grammar to be corrected.
The same block of text with correct grammar.
Create a basic text column which will house the text whose grammar is to be corrected.
Create a new computed column and select the Correct grammar column, which you will find in the Integrations group or by using the search function.
Set the Phrase to point to the basic text column housing the text whose grammar is to be corrected.
The Extract keywords feature allows you to extract keywords from a block of text such as a sentence, paragraph, or series of paragraphs. The most used and most important words and expressions from the text help summarize the content and identify the main topics.
A block of text (sentence or paragraphs).
The most used and most important words and expressions from the text.
Create a basic text column to house the phrase or paragraph(s) whose keywords will be extracted.
Create a new computed column and select the Extract Keywords column, which you will find in the Integrations group or by using the Search function.
Set the Prompt to point to the basic text column whose keywords will be extracted.
The Suggest a color feature takes a prompt and suggests a color hex code.
Input: Simple text
Output: One single color in HEX color code format.
Create a basic text column which will house the text from which a color will be suggested.
Create a new computed column and select the Suggest a color column, which you will find in the Integrations group or by using the search function.
Set the prompt to point to the basic text column from which a color will be suggested.
The Suggest an emoji feature takes a prompt and guesses which emoji would best go with that prompt.
Input: Simple text
Output: One single emoji
Create a basic text column that will house the text from which an emoji will be suggested.
Create a new computed column and select the Suggest an emoji column, which you will find in the Integrations group or by using the search function.
Set the prompt to point to the basic text column from which an emoji will be suggested.
The Summarize feature allows you to translate a difficult text into simpler concepts or to turn meeting notes into a summary.
Input: A block of text, notes, bullet points.
Output: A summarized or simplified version of the text.
Create a basic text column that will house the text to be summarized or simplified.
Create a new computed column and select the Summarize column, which you will find in the Integrations group or by using the search function.
Set the prompt to point to the basic text column whose text is to be summarized or simplified.