Configuring Chat Operations for Einstein AI Connector 1.2

Configure the Chat Answer Prompt Operation

The Chat answer prompt operation sends a request to the configured LLM. This operation uses a plain text prompt as input and responds with a plain text answer.

- Select the operation on the Anypoint Code Builder or Studio canvas.
- In the General properties tab for the operation, enter plain text for the Prompt.
- In Additional properties, enter these values:
  - Model name: Select the model name. The default is OpenAI GPT 3.5 Turbo.
  - Probability: Enter the probability of the model staying accurate. The default is 0.8.
  - Locale: Enter the localization information, which can include the default locale, input locale(s), and expected output locale(s). The default is en_US.

This is the XML configuration for this operation:

<ms-einstein-ai:chat-answer-prompt
doc:name="Chat answer prompt"
doc:id="66426c0e-5626-4dfa-88ef-b09f77577261"
config-ref="Einstein_AI"
prompt="#[payload.prompt]"
/>
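As a sketch of how this operation might sit inside a flow, assuming a hypothetical flow name, HTTP listener configuration, and request shape that are not part of this document:

```xml
<!-- Illustrative sketch only: the flow name, listener config, and request
     shape are assumptions, not part of the connector documentation. -->
<flow name="einstein-chat-answer-flow">
    <!-- Assumes a JSON request body such as {"prompt": "Summarize my open cases"} -->
    <http:listener config-ref="HTTP_Listener_config" path="/chat/answer"/>
    <ms-einstein-ai:chat-answer-prompt
        doc:name="Chat answer prompt"
        config-ref="Einstein_AI"
        prompt="#[payload.prompt]"/>
</flow>
```

Because the prompt is read from `#[payload.prompt]`, any upstream step that places a `prompt` field on the payload works equally well.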
Configure the Chat Generate from Messages Operation
The Chat generate from messages operation sends a prompt request with the provided messages to the configured LLM. This operation accepts multiple messages as plain text input and responds with a plain text answer.

- Select the operation on the Anypoint Code Builder or Studio canvas.
- In the General properties tab for the operation, enter plain text for the Messages.
- In Additional properties, enter these values:
  - Model name: Select the model name. The default is OpenAI GPT 3.5 Turbo.
  - Probability: Enter the probability of the model staying accurate. The default is 0.8.
  - Locale: Enter the localization information, which can include the default locale, input locale(s), and expected output locale(s). The default is en_US.

This is the configuration XML for this operation:
<ms-einstein-ai:chat-generate-from-messages
doc:name="Chat generate from messages"
doc:id="94fa27f3-18ce-436c-8a5f-10b8dbfa4ea3"
config-ref="Einstein_AI"
messages="#[payload.messages]"
/>
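As with the previous operation, a short sketch may help show where the messages come from. The flow name and the exact shape of the messages payload below are assumptions; check the connector reference for the message schema your connector version expects:

```xml
<!-- Illustrative sketch: the flow name and message schema are assumptions. -->
<flow name="einstein-chat-messages-flow">
    <!-- Hypothetical upstream step that builds a messages structure for the LLM -->
    <set-payload value='#[output application/json --- {
        messages: [
            { role: "user", content: "What are the top open opportunities?" }
        ]
    }]'/>
    <ms-einstein-ai:chat-generate-from-messages
        doc:name="Chat generate from messages"
        config-ref="Einstein_AI"
        messages="#[payload.messages]"/>
</flow>
```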



