ChatGPT


Last updated 5 months ago

The ChatGPT plugin allows you to send text to ChatGPT as a prompt, and get ChatGPT's answer as a text classification. You can also send it audio and have it perform entity recognition and transcription.

As part of the prompt sent to ChatGPT, you may choose to send the entire text of the asset if the asset is text, the text that was highlighted as part of an entity annotation, or custom text written by a user into a Text classification box.

Installation

From the Plugin Directory, search for ChatGPT and install the plugin to your organization. More information on installing plugins can be found in the Installing Plugins page.

The name of the plugin is ChatGPT and the creator of the plugin is onur (at) imerit.net.

Setup

First, ensure you have created at least one text classification class in your project, to which ChatGPT will output its generated text.

Then, navigate to the project where you'd like to use ChatGPT.

Integrating ChatGPT into the project workflow

If you'd like for ChatGPT to automatically receive and process tasks, you may integrate it into your workflow.

From the Workflow tab in your project, drag a Plugin stage into the workflow view, and plug it where you'd like it to be. Then, click on the stage you've just placed and select "ChatGPT" from the list of models available.

In the example above, ChatGPT will receive tasks from the Start stage, process them, and send them out to a labeling stage. More info on the other settings in the plugin settings panel can be found below.

Using ChatGPT from the labeling editor

You may run the ChatGPT plugin to generate text directly from the labeling editor. After having added ChatGPT to your organization, enter a project, and click on the "Plugins" icon on the top toolbar. The ChatGPT plugin will be available in the dropdown.

Available Workflows

Passing the Entire Asset as Prompt

To pass the entire text of the asset as a prompt, first ensure that the asset is a .txt file.

From the plugin settings dialog, map ChatGPT's Reply Text class to the class where you'd like ChatGPT to output its generated text. The class being mapped to must be a text class.

In the example above, ChatGPT's output has been mapped to the "Customer Problem" text class in my project. ChatGPT will output its generated text there.

In the Config JSON field, ensure the get_prompt_body_from property is set to asset, and that mode is set to text-prompt.

The text of the asset will be passed as the body of the prompt.

If you'd like to add further instructions for ChatGPT on how to process the text, you may add them to the prompt_prefix and prompt_suffix properties. Any text added to the prefix will be shown to ChatGPT before the full text of the asset, and the suffix will be shown after the asset text.

The prompt ultimately sent to ChatGPT will be built as follows:

prompt_prefix + text contents of asset + prompt_suffix

For example, you may instruct ChatGPT to summarize the asset like so:

{
  "mode": "text-prompt",
  "get_prompt_body_from": "asset",
  "openai_api_key": "<YOUR OPENAI API KEY>",
  "model_name": "gpt-3.5-turbo",
  "prompt_prefix": "Summarize the following text: ",
  "prompt_suffix": ""
}
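As a minimal sketch, the assembly rule above can be expressed in Python (`build_prompt` is a hypothetical helper for illustration, not part of the plugin):

```python
def build_prompt(prompt_prefix: str, body: str, prompt_suffix: str) -> str:
    """Concatenate the prompt exactly as described: prefix + body + suffix."""
    return f"{prompt_prefix}{body}{prompt_suffix}"

# With the example config above, the asset text becomes the prompt body:
prompt = build_prompt("Summarize the following text: ", "The quick brown fox...", "")
print(prompt)  # → Summarize the following text: The quick brown fox...
```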

Passing Entity Annotation Text as a Prompt

To pass the text of entity annotations as a prompt, first ensure that the asset is a .txt file.

From the plugin settings dialog, map ChatGPT's Reply Text class to the class where you'd like ChatGPT to output its generated text. The class being mapped to must be a text class.

Then, map ChatGPT's Prompt Body entity class to the class you'd like to pass as prompt to ChatGPT. Text highlighted in text assets with this class will be part of the prompt. Here is an example mapping:

In the Config JSON field, ensure the get_prompt_body_from property is set to entity-annotation, and that mode is set to text-prompt.

If you'd like to add further instructions for ChatGPT on how to process the text, you may add them to the prompt_prefix and prompt_suffix properties. Any text added to the prefix will be shown to ChatGPT before the highlighted text, and the suffix will be shown after.

The prompt ultimately sent to ChatGPT will be built as follows:

prompt_prefix + text contents of highlighted entities + prompt_suffix

For example, you may instruct ChatGPT to determine the sentiment of the highlighted text like so:

{
  "mode": "text-prompt",
  "get_prompt_body_from": "entity-annotation",
  "openai_api_key": "<YOUR OPENAI API KEY>",
  "model_name": "gpt-3.5-turbo",
  "prompt_prefix": "Determine the sentiment of the following text. Only output a single word. Here is the text: ",
  "prompt_suffix": ""
}

Passing the text contents of a text classification as prompt

From the plugin settings dialog, map ChatGPT's Reply Text class to the class where you'd like ChatGPT to output its generated text. The class being mapped to must be a text class.

Then, map ChatGPT's Prompt Body class to the class you'd like to pass as prompt to ChatGPT. Text typed in this text classification tool will be part of the prompt. Here is an example mapping:

In the Config JSON field, ensure the get_prompt_body_from property is set to text-annotation, and that mode is set to text-prompt.

If you'd like to add further instructions for ChatGPT on how to process the text, you may add them to the prompt_prefix and prompt_suffix properties. Any text added to the prefix will be shown to ChatGPT before the text in the text classification class, and the suffix will be shown after.

The prompt ultimately sent to ChatGPT will be built as follows:

prompt_prefix + text content of the text classification + prompt_suffix

For example, you may instruct ChatGPT to return the capital of the country typed into the text classification class like so:

{
  "mode": "text-prompt",
  "get_prompt_body_from": "text-annotation",
  "openai_api_key": "<YOUR OPENAI API KEY>",
  "model_name": "gpt-3.5-turbo",
  "overwrite": true,
  "prompt_prefix": "Here is a country name: ",
  "prompt_suffix": ". Only output the capital of this country with no other text."
}

Performing Audio NER and Transcription with ChatGPT

The ChatGPT plugin can be used to perform NER annotation and transcription on audio files.

  1. If you wish for ChatGPT to perform transcription, ensure you have created at least one text classification class in your project.

  2. In the Config JSON, set the mode property to audio-prompt, like so:

{
  "mode": "audio-prompt",
  "get_prompt_body_from": "asset",
  "openai_api_key": "",
  "model_name": "gpt-3.5-turbo",
  "prompt_prefix": "",
  "prompt_suffix": ""
}
  3. In the mapping section, pick "Reply Text" from the left dropdown, and your text classification class from the right dropdown.

After following the above steps, whenever the plugin is run on an audio task, ChatGPT will write the transcribed text to the mapped text classification class, and it will create entity annotations on the audio task itself, one for each word.
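As a rough sketch of that per-word behavior, assuming a simplified entity representation with character offsets (the plugin's real output uses the Ango annotation format and anchors entities on the audio task):

```python
def words_to_entities(transcript: str) -> list:
    """Split a transcript into one record per word (simplified sketch:
    character offsets only, no audio timestamps)."""
    entities, offset = [], 0
    for word in transcript.split():
        start = transcript.index(word, offset)
        entities.append({"text": word, "start": start, "end": start + len(word)})
        offset = start + len(word)
    return entities

print(words_to_entities("hello world"))
# → [{'text': 'hello', 'start': 0, 'end': 5}, {'text': 'world', 'start': 6, 'end': 11}]
```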

Performing text NER Annotation with ChatGPT

The ChatGPT plugin can be used to perform NER annotation on text files.

  1. In the Config JSON, set the mode property to ner, like so:

{
  "mode": "ner",
  "get_prompt_body_from": "asset",
  "openai_api_key": "<YOUR OPENAI API KEY>",
  "model_name": "gpt-3.5-turbo",
  "prompt_prefix": "",
  "prompt_suffix": ""
}

When using the ner mode, the get_prompt_body_from property will always be set to asset regardless of what you manually set it to.

After following the above steps, whenever the plugin is run on a task, ChatGPT will perform NER annotation on the text asset according to the names of the Entity tools. You may additionally add custom prompt instructions before the asset text in the prompt_prefix field and after the asset text in the prompt_suffix field.

The prompt ultimately sent to ChatGPT will be built as follows:

  • prompt_prefix

  • "My NER tags are "

  • [comma-joined list of the names of the entity tools. For example, "Vehicle, Color, Year"]

  • "Can you find the NER tags..." (a longer prompt explaining to ChatGPT the exact JSON format in which it needs to return the annotations)

  • [body text of the asset]

  • prompt_suffix
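The list above can be sketched as follows (`build_ner_prompt` is an illustrative helper; the format-instruction sentence is abbreviated, and only the order of the parts reflects this page):

```python
def build_ner_prompt(prompt_prefix, entity_tool_names, asset_text, prompt_suffix):
    """Assemble the NER prompt in the order listed above. The format
    instructions are abbreviated; the plugin sends a longer sentence."""
    return "".join([
        prompt_prefix,
        "My NER tags are ",
        ", ".join(entity_tool_names),
        ". Can you find the NER tags... ",  # abbreviated format instructions
        asset_text,
        prompt_suffix,
    ])

p = build_ner_prompt("", ["Vehicle", "Color", "Year"], "A red 1969 Mustang.", "")
print(p)
```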

Chat with ChatGPT

From the plugin settings dialog, map ChatGPT's Reply Text class to the class where you'd like to store the chat history. The class being mapped to must be a text class.

Then, map ChatGPT's Prompt Body class to the class you'd like to pass as prompt to ChatGPT. Text typed in this text classification tool will be part of the prompt. Here is an example mapping:

In the Config JSON field, ensure the get_prompt_body_from property is set to text-annotation, and that mode is set to chat.

The prompt_prefix and prompt_suffix parameters are disabled in this mode, so they can be left empty.

For example, you may instruct ChatGPT to initiate a conversation in Ango Hub:

{
  "mode": "chat",
  "get_prompt_body_from": "text-annotation",
  "openai_api_key": "<YOUR OPENAI API KEY>",
  "model_name": "gpt-3.5-turbo",
  "prompt_prefix": "",
  "prompt_suffix": ""
}

A sample conversation flow:

  • Write your prompt in the 'Prompt' text field.

  • Save the annotations in the editor (shortcut: S).

  • Click the Plugin icon in the editor, then click ChatGPT (after making the necessary configuration).

  • ChatGPT's answer will appear in the Chat History text tool, together with the complete chat history.

  • Repeat the process to reply to ChatGPT.
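Conceptually, each exchange appends one turn to the Chat History text class. A minimal sketch, assuming a simple "User:/ChatGPT:" layout (the plugin's actual history format may differ):

```python
def append_turn(history: str, prompt: str, reply: str) -> str:
    """Append one user/assistant exchange to the stored chat history."""
    return history + f"User: {prompt}\nChatGPT: {reply}\n"

history = ""
history = append_turn(history, "Hello!", "Hi, how can I help?")
history = append_turn(history, "Tell me a joke.", "Why did the chicken cross the road?")
print(history)
```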

Using an Image as Prompt

Using an image as prompt only works with OpenAI models supporting multi-modal input, like gpt-4-vision-preview.

Using an image as prompt does not work with the default GPT 3.5 model.

Only .jpg and .png images can be passed to ChatGPT this way. .tiff images will not work.

To use an image as prompt, the ChatGPT plugin will need to be run on image assets. It will not work otherwise.

Images are limited to 20MB each. The ChatGPT plugin will not run on larger images.
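Given the constraints above, a quick client-side pre-flight check might look like this (`image_ok` is a hypothetical helper; the extension and size limits are the ones stated on this page):

```python
MAX_BYTES = 20 * 1024 * 1024           # 20 MB limit stated above
ALLOWED_EXTENSIONS = {".jpg", ".png"}  # .tiff and other formats will not work

def image_ok(filename: str, size_bytes: int) -> bool:
    """Return True if the image meets the plugin's stated constraints."""
    name = filename.lower()
    return any(name.endswith(ext) for ext in ALLOWED_EXTENSIONS) and size_bytes <= MAX_BYTES

print(image_ok("scan.png", 5_000_000))   # → True
print(image_ok("scan.tiff", 5_000_000))  # → False
```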

From the plugin settings dialog, map ChatGPT's Reply Text class to the class where you'd like ChatGPT to output its generated text. The class being mapped to must be a text class.

If you wish to also add to the prompt text from a text classification, map that text classification class to ChatGPT's Prompt Body class.

Before running the plugin, ensure that in the Config JSON, the mode is set to image-prompt. If you wish to add to the prompt text from a text-type classification, after mapping the text classification to ChatGPT's Prompt Body class, set the get_prompt_body_from property to text-annotation.

If no text is provided in the prompt_prefix and prompt_suffix fields, by default, the prompt will be "What's in this image?" followed by the URL of the image.

This is a sample Config JSON for using an image as prompt:

{
  "mode": "image-prompt",
  "get_prompt_body_from": "asset",
  "openai_api_key": "<YOUR OPENAI API KEY>",
  "model_name": "gpt-4-vision-preview",
  "prompt_prefix": "",
  "prompt_suffix": ""
}

The prompt ultimately sent to ChatGPT will be built as follows:

  • prompt_prefix

  • [the contents of a text classification tool, if so mapped]

  • [the URL to the image]

  • prompt_suffix
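Putting those parts together, including the "What's in this image?" default mentioned above (`build_image_prompt` is an illustrative helper, not plugin code):

```python
def build_image_prompt(prompt_prefix, classification_text, image_url, prompt_suffix):
    """Assemble the image-prompt text in the order listed above; with no
    prefix, suffix, or mapped classification, fall back to the default."""
    if not (prompt_prefix or classification_text or prompt_suffix):
        return f"What's in this image? {image_url}"
    return f"{prompt_prefix}{classification_text}{image_url}{prompt_suffix}"

print(build_image_prompt("", "", "https://example.com/cat.png", ""))
# → What's in this image? https://example.com/cat.png
```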

Preventing Annotation Overwrite

By default, when you run the ChatGPT plugin on a task, it will erase all annotations present and then create its own.

If you wish for ChatGPT to preserve the existing annotations, deselect the overwrite option in Plugin Settings.

Using your own OpenAI API key and changing the model used

By default, if you do not provide an OpenAI API key, the plugin will use a default key provided by iMerit.

This default key is only meant for testing the plugin and should not be used in production projects. When using the default key, only the gpt-3.5-turbo model can be used. The default key is heavily rate-limited, allowing a maximum of 3 requests per minute, 200 requests per day, and 20,000 tokens per minute globally across the entire Ango Hub platform. By providing your own API key, you bypass these limits and may use any GPT model provided by OpenAI.
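If you do stay on the default key, it can help to throttle requests client-side. A sketch of the wait computation for the 3-requests-per-minute limit (pure logic, no sleeping; `seconds_to_wait` is a hypothetical helper, and the limits are the ones quoted above):

```python
def seconds_to_wait(request_times, now, per_minute=3):
    """Given timestamps (in seconds) of recent requests, return how long to
    wait so the next request stays within `per_minute` requests per minute."""
    recent = [t for t in request_times if now - t < 60.0]
    if len(recent) < per_minute:
        return 0.0
    # Wait until the oldest request in the window falls out of it.
    return 60.0 - (now - min(recent))

print(seconds_to_wait([0.0, 10.0, 20.0], now=30.0))  # → 30.0
```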

To use your own API key, provide it in the openai_api_key field of the Config JSON, like so:

{
  "openai_api_key": "<YOUR_OPENAI_API_KEY>",
  "model_name": "gpt-3.5-turbo",
  ...
}

When you provide your own API key, you may choose to change the model used by the plugin. For example, to use GPT-4, enter gpt-4 in the model_name field, like so:

{
  "openai_api_key": "<YOUR_OPENAI_API_KEY>",
  "model_name": "gpt-4",
  ...
}

Please ensure that the API key you are using is compatible with the model you have chosen.

For example, using an API key that does not have access to GPT-4 together with gpt-4 as the model name will cause the plugin to fail.
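That rule can be captured in a small pre-flight check (`validate_config` is a hypothetical helper mirroring this page's description of the default key; it does not call the OpenAI API):

```python
def validate_config(config: dict) -> list:
    """Return problems with a ChatGPT plugin Config JSON, based on the
    key/model rules described on this page."""
    problems = []
    model = config.get("model_name", "gpt-3.5-turbo")
    # Only gpt-3.5-turbo works with the iMerit-provided default key.
    if model != "gpt-3.5-turbo" and not config.get("openai_api_key"):
        problems.append(f"model {model!r} requires your own openai_api_key")
    return problems

print(validate_config({"model_name": "gpt-4", "openai_api_key": ""}))
# → ["model 'gpt-4' requires your own openai_api_key"]
```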

A list of all models that can be used is available at https://platform.openai.com/docs/models.

Running ChatGPT

If you have plugged ChatGPT into your workflow, ChatGPT will be activated every time a task is passed into its input plug.

Click on the ChatGPT text to run the plugin with its default preset, if any (see the Plugin Setting Presets page for more information on presets). Otherwise, click on the three dots to the right to run the plugin with custom settings: the plugin settings dialog will appear. More information on the settings available in this dialog can be found below.

Open the plugin's Config JSON. If you are running the plugin from the workflow, this will be available when clicking on the plugin stage. If running the plugin from the editor, it will be available when clicking on the three dots next to the plugin's name.

From your project's Settings tab, enter the Category Schema section, and add as many Entity-type tools as necessary. The names of the entity tools are important, as they will be embedded in the prompt passed to ChatGPT, so please ensure your entity tool names are descriptive (e.g. person, year, place).


If you wish to run ChatGPT directly from the labeling editor, follow the instructions in the "Using ChatGPT from the labeling editor" section above.

You may check the plugin's progress from the Plugin Sessions dialog. More information on checking plugin progress can be found in the Monitoring Plugin Progress page.
