# LLM Chat Labeling Editor

Ango Hub provides a labeling editor in which users can hold live conversations with LLMs and annotate existing LLM conversations.

The LLM Chat Labeling Editor opens when a user opens an LLM Chat-type asset. For more information on how to create and/or import such assets, please [read this docs page](/data/importing-assets/creating-and-importing-llm-chat-assets.md).

{% hint style="info" %}
This article will exclusively go over Ango Hub’s LLM Chat labeling interface. Features common to all labeling editors are instead [explained here](/labeling/labeling-editor-interface.md).
{% endhint %}

<div data-full-width="true"><figure><img src="/files/ZWO1sEe286flDOkz7U0e" alt=""><figcaption></figcaption></figure></div>

## Overview

### Supported Labeling Tools

The LLM Chat labeling editor supports the following labeling tools:

**Tools**

* Message

**Classifications**

* Radio
* Checkbox
* Single-Select Dropdown
* Multi-Select Dropdown
* Single-Select Tree
* Multi-Select Tree
* Text

**Relations**

* Single Relation
* Group Relation

## Pre-Existing Conversations and Live Conversations

As outlined in [the docs page on creating LLM Chat-type assets](/data/importing-assets/creating-and-importing-llm-chat-assets.md), there are two possible types of LLM Chat assets: "pre-existing" and "live" conversations.

Pre-existing conversations are conversations that took place outside of the Ango Hub platform and that you import into it for annotation.

Live conversations start empty, and the annotator chats with your LLM live, directly from Ango Hub.

### How to chat with your LLM live from Ango Hub

{% hint style="info" %}
This functionality is not available on private cloud and on-premise deployments of Ango Hub.
{% endhint %}

Once the LLM Chat-type assets have been created, if they are empty (that is, of the "live" type), users entering the asset will be greeted with an empty message UI:

<div data-full-width="true"><figure><img src="/files/mxSMcW8NL7lT8n7oGD22" alt=""><figcaption></figcaption></figure></div>

To prompt the LLM, enter your text in the text area labeled *Enter prompt here...* and press Enter or click the Send button. The LLM will receive your prompt and respond.

#### Entering Markdown and LaTeX in prompts

You may use Markdown in your prompts, as well as LaTeX enclosed in dollar signs. To preview the final, formatted version of your prompt, click the "Markdown" icon to the left of the prompt text area:

<figure><img src="/files/BPyVhZKvZ07dTKxOsxfu" alt=""><figcaption></figcaption></figure>
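For instance, a prompt mixing Markdown formatting with inline dollar-sign LaTeX might look like the following (the prompt text itself is purely illustrative):

```markdown
Explain the formula $E = mc^2$ **step by step**:

- Define each symbol
- Give one real-world example
```

When previewed, the bold text and the bullet list are rendered, and the LaTeX between dollar signs appears as a typeset equation.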

If the model's outputs also contain Markdown or LaTeX, they will automatically be formatted. To view a message's output in plain text (unformatted), click on the *Switch to Raw Text* button below the message:

<figure><img src="/files/0tC1itHnDPUeMuyGf6Tv" alt="" width="375"><figcaption></figcaption></figure>

## How to Annotate LLM Chats on Ango Hub

Regardless of whether the chat is a pre-existing one or a live one, from the *Tools* section on the left-hand side of the screen, select a [*Message*](/labeling/labeling-tools/tools/message.md)-type tool. Then, click on the chat message you would like to classify. The classifications that have been nested under the Message-type tool will appear, and you will be able to answer them.

<div data-full-width="true"><figure><img src="/files/8h8poDA4u1Pq5KFRh1nX" alt=""><figcaption></figcaption></figure></div>

Click on *Save* or *Submit* when you are done.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.imerit.net/labeling/labeling-editor-interface/llm-chat-labeling-editor.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
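The query pattern above can be sketched in Python. Only the base URL and the `ask` query parameter come from this page; `build_ask_url` is a hypothetical helper name, and the resulting URL could then be fetched with any HTTP client:

```python
from urllib.parse import urlencode

# Base URL of this documentation page, taken from the GET pattern above.
BASE = "https://docs.imerit.net/labeling/labeling-editor-interface/llm-chat-labeling-editor.md"

def build_ask_url(question: str) -> str:
    """Build the GET URL that asks the documentation a natural-language question.

    urlencode handles URL-encoding of spaces and punctuation in the question.
    """
    return f"{BASE}?{urlencode({'ask': question})}"

url = build_ask_url("Which classifications can be nested under a Message tool?")
print(url)
```

From there, a standard client such as `urllib.request.urlopen(url)` would retrieve the answer together with the relevant excerpts and sources.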
