Developing Integrations Using Einstein
Cloud and Desktop IDE
To jumpstart your integration development, use natural language prompts in Einstein for Anypoint Code Builder to generate flows for you. These generative flows are powered by pre-trained large language models (LLMs) that exist within the Salesforce Shared Trust Boundary and are at the core of the generative flow capabilities in Anypoint Code Builder.
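For example, a prompt such as "Create a flow that listens for HTTP GET requests on /orders and returns a JSON response" might produce a flow along the lines of the following sketch. The prompt wording, flow name, listener path, and payload shown here are illustrative assumptions, not the exact output Einstein generates for any given prompt.

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd">

    <!-- Hypothetical listener configuration; adjust the host and port for your environment -->
    <http:listener-config name="HTTP_Listener_config">
        <http:listener-connection host="0.0.0.0" port="8081"/>
    </http:listener-config>

    <!-- Illustrative flow: listens on /orders and returns a static JSON payload -->
    <flow name="get-orders-flow">
        <http:listener config-ref="HTTP_Listener_config" path="/orders" doc:name="Listener"/>
        <set-payload value='#[output application/json --- {status: "ok", orders: []}]' doc:name="Set Payload"/>
    </flow>
</mule>

Whatever Einstein generates, review and adjust the flow in your project before running it, as described in the note at the end of this topic.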
Trust Layer
The Einstein trust layer bridges Anypoint Platform and the LLMs, as shown in the following flow:
1. The Anypoint Code Builder user creates a prompt based on their use case and sends it to Einstein.
2. To minimize inaccurate responses (the AI "hallucination" effect), the prompt is grounded with factual MuleSoft proprietary data and user context.
3. To ensure the data travels safely to the external LLM, the prompt is sent to OpenAI through the secure gateway.
4. To keep your data secure, all data remains in Salesforce.
5. After the LLM generates a response, the content is sent back through the secure gateway.
6. MuleSoft validates the generated output to ensure the generated flow works within the MuleSoft ecosystem.
7. Anypoint Code Builder logs the prompt and generated responses for monitoring before surfacing the validated response.
This feature uses generative AI, which can produce inaccurate or harmful responses. Before acting on this feature's output, review it for accuracy and safety. You assume responsibility for the output when making business decisions.