Saturday, 14 October 2023

Einstein Trust Layer

 Interacting with generative AI is a simple as sending a prompt to a large language model and getting a response.


Salesforce Introduced Einstein trust layer as part of the Einstein 1 Platform.


The trust layer is a secure intermediary for user interactions with LLM’s. Masking PII checking output toxicity etc.


Every prompt that runs through the trust layer starts out in the Einstein 1 platform and usually originates from one of our CRM applications.


Once the prompt hits the trust layer it goes through a multi-step process to ensure that the response is generated and sent back to the user.


  1. Secure Data Retrieval

   If the prompt uses merge fields to pull in record data for example from a contact. It pulls that record data from the page context in one instance or if that prompt starts out on the server it pulls via SOQL.


      2. Dynamic Grounding 


Dynamic grounding is able to bring business logic into the transaction itself.


Ex : 


If that prompt requires you to go out and pull data from flow or from data cloud you can bring in that additional context into to enrich the prompt and further grounded in detail so that it’s more knowledgeable about the case at hand .


   3. Data Masking


To prevent sharing PII we use data masking a detection tool identify sensitive data like government IDs replacing them with placeholders like Person_0,Person_1 etc.


Trust layer maintains this placeholder origin mapping for you.


Once masking is completed we apply additional ‘prompt Defense’ 


  4. Prompt Defense


Ensuring that model responses remain reliable and avoid misleading outputs.


Once prompt is secured it then proceeds to the LLM  Gateway.


LMM Gateway :


This manages connections with various model providers and on reaching out to the gateway the prompt roots to the necessary model.


If sent to external models like Open AI the data is encrypted and never stored externally.


  5. Zero Retention :


Open AI is our first LLM partner operates a zero retention basis for prompts.


They also have a Content moderation API that flags unusual or harmful content and alerts salesforce of this immediately.


Once the response has been generated from the LLM gateway it’s ready to present back to the user.


But first we want to take that response and ensure that it’s safe with the first step of the after generation process and that is toxicity detection .


  6.Toxicity Detection :


We are going to take that response and run it through our toxicity filter to ensure that there is no harmful content any negative language and make sure that it’s safe and secure for your users.


  7.Data Demasking :


We are going to take all of the data that was masked in the original step through the masking process and we’re going to rehydrate that through demasking so that all of the data like the name , first name and addresses are all put back into the response and then it’s ready to be presented back to the user.


8.Feedback Framework


We have feedback framework in place with api’s so that you can provide feedback to the generation so whether or not it was useful or not whether or not it was successful is all logged and will be used for us to retrain the models and ensure that we’re providing the responses that you need.


9.Audit Trail 


We allow you to store the prompt any actions that were taken the toxicity so that you can provide trusted generative AI at scale.


This is just the first step towards creating the trust between yourself your company and generative AI.




No comments:

Post a Comment