Send Chat Message

Hyland RPA Activities

Platform: Hyland RPA
Product: Activities
Release: Foundation 23.2

Description

Sends a chat message to an OpenAI model and retrieves the answer.

For a description of the available properties, see the API reference | Endpoints | Chat page on the OpenAI Platform site: Platform OpenAI Chat Properties.
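
Conceptually, the activity issues a request against the OpenAI chat completions endpoint and returns the first answer. The following is a minimal Python sketch of the equivalent direct call, assuming the openai package; the model name and API key are illustrative placeholders, and in a workflow these values come from the activity properties instead:

    from openai import OpenAI

    client = OpenAI(api_key="YOUR_API_KEY")  # placeholder; the activity handles authentication itself

    # The Message input property becomes the user message of the request.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; the model used by the activity depends on its configuration
        messages=[{"role": "user", "content": "Summarize the attached invoice in one sentence."}],
    )

    # The Answer output property corresponds to the content of the first choice.
    answer = response.choices[0].message.content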

Properties

Analyst

  • Description: Optional text for documentation purposes.

Common

  • Continue On Error: Continues the execution of the workflow even if an error occurs during the execution of the activity.

  • Delay Before: Delay time in milliseconds before the activity is executed. The default value is 250 milliseconds.

  • Delay After: Delay time in milliseconds after the activity is executed. The default value is 250 milliseconds.

  • Disable Log: Disables the logging functionality for this activity.

  • Disable Protocol: Disables the protocol functionality for this activity.

  • Exclude from Global Delay: Excludes this activity from the Global Delay functionality.

  • Timeout: Maximum duration in milliseconds for which the activity tries to execute.

Input

  • Message: The message to send to the model.

Input [Expert]

  • Frequency Penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, reducing the likelihood that the model will repeat the same line verbatim.

    The default value is 0.

  • Maximum Tokens: The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the context of the model.

    If left blank or set to 0, the number of generated tokens is unlimited (bounded only by the model context).

  • Presence Penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the likelihood that the model will talk about new topics.

    The default value is 0.

  • Stop: Up to 4 sequences where the API will stop generating further tokens.

  • Temperature: The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic.

    The default value is 1. We recommend changing this property or the Top P property, but not both.

  • Top P: Number between 0 and 1. An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with Top P probability mass. A value of 0.1 means only the tokens comprising the top 10% probability mass are considered.

    The default value is 1. We recommend changing this property or the Temperature property, but not both. See the sketch after this list for how the expert properties relate to the request parameters.
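
As a rough guide, the following Python sketch passes each expert property as its matching OpenAI request parameter. The mapping is an assumption based on the parameter names in the OpenAI API, and the values shown are simply the documented defaults:

    from openai import OpenAI

    client = OpenAI(api_key="YOUR_API_KEY")  # placeholder

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative
        messages=[{"role": "user", "content": "Hello"}],
        frequency_penalty=0,    # Frequency Penalty: -2.0 to 2.0, default 0
        max_tokens=None,        # Maximum Tokens: None/blank means limited only by the model context
        presence_penalty=0,     # Presence Penalty: -2.0 to 2.0, default 0
        stop=None,              # Stop: up to 4 sequences, e.g. ["\n\n"]
        temperature=1,          # Temperature: 0 to 2, default 1; tune this or top_p, not both
        top_p=1,                # Top P: 0 to 1, default 1
    )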

Output

  • Answer: The chat response of the model. For more information, see the Detailed Response property below.

Output [Expert]

  • Detailed Response: A detailed response object which contains additional information about the response and token usage.
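
In terms of the underlying API, the detailed response corresponds to the full chat completion object, which carries token usage alongside the answer. A minimal Python sketch of reading those fields, assuming the openai package (the exact shape of the activity's response object in a workflow may differ):

    from openai import OpenAI

    client = OpenAI(api_key="YOUR_API_KEY")  # placeholder
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative
        messages=[{"role": "user", "content": "Hello"}],
    )

    print(response.choices[0].message.content)  # the Answer output
    print(response.usage.prompt_tokens)         # tokens consumed by the input message
    print(response.usage.completion_tokens)     # tokens generated for the answer
    print(response.usage.total_tokens)          # total token usage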