ChatGPT
The ChatGPT plugin allows you to send text to ChatGPT as a prompt, and get ChatGPT's answer as a text classification.
As part of the prompt sent to ChatGPT, you may choose to send the entire text of the asset (if the asset is text), the text highlighted as part of an entity annotation, or custom text written by a user into a text classification box.
From the Plugin Directory, search for ChatGPT and install the plugin to your organization. More information on installing plugins can be found on the Installing Plugins page.
The name of the plugin is ChatGPT, and its creator is onur (at) imerit.net.
First, ensure you have created at least one text classification class in your project, to which ChatGPT will output its generated text.
Then, navigate to the project where you'd like to use ChatGPT.
If you'd like for ChatGPT to automatically receive and process tasks, you may integrate it into your workflow.
From the Workflow tab in your project, drag a Plugin stage into the workflow view, and plug it where you'd like it to be. Then, click on the stage you've just placed and select "ChatGPT" from the list of models available.
In the example above, ChatGPT will receive tasks from the Start stage, process them, and send them out to a labeling stage. More info on the other settings in the plugin settings panel can be found below.
You may run the ChatGPT plugin to generate text directly from the labeling editor. After having added ChatGPT to your organization, enter a project, and click on the "Plugins" icon on the top toolbar. The ChatGPT plugin will be available in the dropdown.
Click on the ChatGPT text to run the plugin with its default preset, if any (click here for more information on presets). Otherwise, click on the three dots to the right to run the plugin with custom settings. The plugin settings dialog will appear. More information on the settings available in this dialog can be found below.
To pass the entire text of the asset as a prompt, first ensure that the asset is a .txt file.
From the plugin settings dialog, map ChatGPT's Reply Text class to the class where you'd like ChatGPT to output its generated text. The class being mapped to must be a text class.
In the example above, ChatGPT's output has been mapped to the "Customer Problem" text class in the project. ChatGPT will output its generated text there.
In the Config JSON field, ensure the get_prompt_body_from property is set to asset, and that mode is set to text-prompt.
The text of the asset will be passed as the body of the prompt.
If you'd like to add further instructions for ChatGPT on how to process the text, you may add them to the prompt_prefix and prompt_suffix properties. Any text added to the prefix will be shown to ChatGPT before the full text of the asset, and the suffix will be shown after the asset text.
The prompt ultimately sent to ChatGPT will be built as follows:
prompt_prefix + text contents of asset + prompt_suffix
For example, you may instruct ChatGPT to summarize the asset like so:
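As a sketch, a summarization setup might use a Config JSON like the one below. The property names are the ones documented above; the exact wording of the prefix is illustrative, and any other properties already present in your Config JSON should be left as they are.

```json
{
  "get_prompt_body_from": "asset",
  "mode": "text-prompt",
  "prompt_prefix": "Summarize the following text in one short paragraph:",
  "prompt_suffix": ""
}
```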
To pass the text of entity annotations as a prompt, first ensure that the asset is a .txt file.
From the plugin settings dialog, map ChatGPT's Reply Text class to the class where you'd like ChatGPT to output its generated text. The class being mapped to must be a text class.
Then, map ChatGPT's Prompt Body entity class to the class you'd like to pass as prompt to ChatGPT. Text highlighted in text assets with this class will be part of the prompt. Here is an example mapping:
In the Config JSON field, ensure the get_prompt_body_from property is set to entity-annotation, and that mode is set to text-prompt.
If you'd like to add further instructions for ChatGPT on how to process the text, you may add them to the prompt_prefix and prompt_suffix properties. Any text added to the prefix will be shown to ChatGPT before the highlighted text, and the suffix will be shown after.
The prompt ultimately sent to ChatGPT will be built as follows:
prompt_prefix + text contents of highlighted entities + prompt_suffix
For example, you may instruct ChatGPT to determine the sentiment of the highlighted text like so:
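A minimal sketch of the Config JSON for this sentiment example might look like this (the prefix wording is illustrative, not prescribed by the plugin):

```json
{
  "get_prompt_body_from": "entity-annotation",
  "mode": "text-prompt",
  "prompt_prefix": "Determine the sentiment (positive, negative, or neutral) of the following text:",
  "prompt_suffix": ""
}
```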
From the plugin settings dialog, map ChatGPT's Reply Text class to the class where you'd like ChatGPT to output its generated text. The class being mapped to must be a text class.
Then, map ChatGPT's Prompt Body class to the class you'd like to pass as prompt to ChatGPT. Text typed in this text classification tool will be part of the prompt. Here is an example mapping:
In the Config JSON field, ensure the get_prompt_body_from property is set to text-annotation, and that mode is set to text-prompt.
If you'd like to add further instructions for ChatGPT on how to process the text, you may add them to the prompt_prefix and prompt_suffix properties. Any text added to the prefix will be shown to ChatGPT before the text in the text classification class, and the suffix will be shown after.
The prompt ultimately sent to ChatGPT will be built as follows:
prompt_prefix + text content of the text classification + prompt_suffix
For example, you may instruct ChatGPT to return the capital of the country typed into the text classification class like so:
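A sketch of the Config JSON for this example might be as follows (the prefix text is illustrative; only the get_prompt_body_from and mode values are required by this workflow):

```json
{
  "get_prompt_body_from": "text-annotation",
  "mode": "text-prompt",
  "prompt_prefix": "Return only the capital city of the following country:",
  "prompt_suffix": ""
}
```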
The ChatGPT plugin can be used to perform NER annotation on text files.
From your project's Settings tab, enter the Category Schema section, and add as many Entity-type tools as necessary. The names of the entity tools are important, as they will be embedded in the prompt passed to ChatGPT, so please ensure they are descriptive (e.g. person, year, place).
In the Config JSON, set the mode property to ner, like so:
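A minimal sketch of the relevant Config JSON fragment (other properties in your Config JSON can remain unchanged):

```json
{
  "mode": "ner"
}
```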
When using the ner mode, the get_prompt_body_from property will always be set to asset, regardless of what you manually set it to.
After following the above steps, whenever the plugin is run on a task, ChatGPT will perform NER annotation on the text asset according to the names of the Entity tools. You may additionally add custom prompt instructions before the asset text in the prompt_prefix field and after the asset text in the prompt_suffix field.
The prompt ultimately sent to ChatGPT will be built as follows:
prompt_prefix
"My NER tags are "
[comma-joined list of the names of the entity tools, for example "Vehicle, Color, Year"]
"Can you find the NER tags..." (a longer prompt explaining to ChatGPT the exact JSON format in which it needs to return the annotations)
[body text of the asset]
prompt_suffix
From the plugin settings dialog, map ChatGPT's Reply Text class to the class where you'd like to store the chat history. The class being mapped to must be a text class.
Then, map ChatGPT's Prompt Body class to the class you'd like to pass as prompt to ChatGPT. Text typed in this text classification tool will be part of the prompt. Here is an example mapping:
In the Config JSON field, ensure the get_prompt_body_from property is set to text-annotation, and that mode is set to chat.
The prompt_prefix and prompt_suffix parameters are disabled in this mode, so no values are needed for them.
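Putting the above together, a chat-mode Config JSON might be as minimal as this sketch:

```json
{
  "get_prompt_body_from": "text-annotation",
  "mode": "chat"
}
```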
For example, you may instruct ChatGPT to initiate a conversation in AngoHub:
A sample conversation:
1. Write your prompt in the 'Prompt' text field.
2. Save the annotations in the editor (shortcut key: 'S').
3. Click the Plugins icon in the editor, then click ChatGPT (make any necessary configurations beforehand).
4. ChatGPT's answer will appear in the Chat History text tool, together with the complete chat history.
5. Repeat the process to reply to ChatGPT.
Using an image as prompt only works with OpenAI models supporting multi-modal input, like gpt-4-vision-preview.
Using an image as prompt does not work with the default GPT 3.5 model.
Only .jpg and .png images can be passed to ChatGPT this way. .tiff images will not work.
To use an image as prompt, the ChatGPT plugin will need to be run on image assets. It will not work otherwise.
Images are limited to 20MB each. The ChatGPT plugin will not run on larger images.
From the plugin settings dialog, map ChatGPT's Reply Text class to the class where you'd like ChatGPT to output its generated text. The class being mapped to must be a text class.
If you wish to also add to the prompt text from a text classification, map that text classification class to ChatGPT's Prompt Body class.
Before running the plugin, ensure that in the Config JSON, the mode is set to image-prompt. If you wish to add to the prompt text from a text-type classification, after mapping the text classification to ChatGPT's Prompt Body class, set the get_prompt_body_from property to text-annotation.
If no text is provided in the prompt_prefix and prompt_suffix fields, by default the prompt will be "What's in this image?" followed by the URL of the image.
This is a sample Config JSON for using an image as the prompt:
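The sketch below is reconstructed from the properties described above (the original sample is not reproduced here); leave the prefix and suffix empty to fall back to the default "What's in this image?" prompt:

```json
{
  "mode": "image-prompt",
  "get_prompt_body_from": "text-annotation",
  "prompt_prefix": "",
  "prompt_suffix": ""
}
```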
The prompt ultimately sent to ChatGPT will be built as follows:
prompt_prefix
[the contents of a text classification tool, if so mapped]
[the URL to the image]
prompt_suffix
By default, when you run the ChatGPT plugin on a task, it will erase all annotations present and then create its own.
If you wish for ChatGPT to preserve the existing annotations, deselect the overwrite option in Plugin Settings.
By default, if you do not provide an OpenAI API key, the plugin will use a default key provided by iMerit.
This default key is only meant for testing the plugin and should not be used in production projects. When using the default key, only the gpt-3.5-turbo model can be used. The default key is heavily rate limited, allowing a maximum of 3 requests per minute, 200 requests per day, and 20,000 tokens per minute globally across the entire Ango Hub platform. By providing your own API key, you may bypass these limits and use any GPT model provided by OpenAI.
To provide your own API key, enter it in the openai_api_key field of the Config JSON, like so:
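A minimal sketch, with a placeholder in place of a real key:

```json
{
  "openai_api_key": "sk-...your-key-here..."
}
```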
When you provide your own API key, you may choose to change the model used by the plugin. For example, to use GPT-4, enter gpt-4 in the model_name field, like so:
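A sketch combining both fields (the key value is a placeholder):

```json
{
  "openai_api_key": "sk-...your-key-here...",
  "model_name": "gpt-4"
}
```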
Please ensure that the API key you are using is compatible with the model you have chosen.
For example, using a gpt-3.5-turbo key as the API key and gpt-4 as the model name will cause the plugin not to work.
A list of all possible models that can be used can be found here:
If you have plugged ChatGPT into your workflow, ChatGPT will be activated every time a task is passed into its input plug.
If you wish to run ChatGPT directly from the labeling editor, follow the instructions in this section.
You may check the plugin's progress from the Plugin Sessions dialog. More information on checking plugin progress here.
Open the plugin's Config JSON. If you are running the plugin from workflow, this will be available when clicking on the plugin stage. If running the plugin from the editor, this will be available when clicking on the three dots next to the plugin's name.