Tuesday, 27 December 2022

Dynamically Pass Bind Variables to a SOQL Query (Spring 23)

 With the new Database.queryWithBinds, Database.getQueryLocatorWithBinds, and Database.countQueryWithBinds methods, the bind variables in a query are resolved from a Map parameter directly by key rather than from Apex code variables.

ex :

Map<String, Object> acctBinds = new Map<String, Object>{'acctName' => 'Acme Corporation'};

List<Account> accts = Database.queryWithBinds('SELECT Id FROM Account WHERE Name = :acctName',  acctBinds,   AccessLevel.USER_MODE);


Wednesday, 14 December 2022

Salesforce Flow elements

 1 Create = 1 DML

1 GET = 1 SOQL


1 Update without filter criteria = 1 DML

1 Update with filter criteria = 1 DML + 1 SOQL


1 Delete without filter criteria = 1 DML

1 Delete with filter criteria = 1 DML + 1 SOQL


The statements above count toward flow limits, so let's be mindful of them.


PS: using filter criteria in an Update or Delete element is like embedding a Get element in the Update or Delete.

If you use filter criteria in Update Records or Delete Records, the element issues a query.

object and field level security check in Apex

 "With sharing" does not enforce object-level and field-level security; it only enforces record-level sharing.


But there are other ways in which you can enforce them.


Here are a couple of options:


1. In a SOQL query, use the WITH SECURITY_ENFORCED clause to enforce field- and object-level security checks on the fields and objects referenced in the SELECT clause.
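
ex : a minimal sketch using standard Contact fields:

```apex
// Throws a System.QueryException if the running user lacks
// read access to Contact or to any of the selected fields.
List<Contact> contacts = [
    SELECT Id, Email
    FROM Contact
    WITH SECURITY_ENFORCED
];
```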


2. In Apex code, you can use sObject describe result methods and field describe result methods that check the current user's permissions. These methods are isAccessible, isCreateable, isUpdateable, and isDeletable.
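
ex : a minimal sketch checking create access on the standard Account object before a DML statement:

```apex
if (Schema.sObjectType.Account.isCreateable()
    && Schema.sObjectType.Account.fields.Name.isCreateable()) {
    insert new Account(Name = 'Acme');
} else {
    // Handle the missing permission, e.g. surface an error to the user.
}
```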


3. Use the stripInaccessible method to enforce field- and object-level data protection.
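
ex : a minimal sketch; stripInaccessible removes fields the running user can't read before the records are used:

```apex
List<Account> accounts = [SELECT Id, Name, AnnualRevenue FROM Account];

// Remove any fields the running user has no read access to.
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.READABLE, accounts);

List<Account> safeAccounts = decision.getRecords();
```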


Sunday, 11 December 2022

Apex CPU Time Limit

 Usually, developers come across this term while developing complex logic in Apex. Apex throws System.LimitException errors, and "Apex CPU time limit exceeded" is one of them.


Apex CPU time limits:

1] Synchronous Limit - 10,000 milliseconds (10 Seconds)

2] Asynchronous Limit - 60,000 milliseconds (60 Seconds)


To resolve this exception, we can follow the points below:


1] Avoid multiple automations on a single object

2] Use a trigger framework

3] Avoid multiple validation rules

4] Use map-based queries

5] Use async Apex (future, Batch, Queueable, Schedulable)

6] Use aggregate SOQL

7] Avoid nested for loops

8] Avoid using Process Builder

9] Use the return-early pattern
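
As an example of points 4] and 7], a map keyed by Id replaces a nested loop when matching child records to their parents (a sketch using standard objects):

```apex
Map<Id, Account> accountsById = new Map<Id, Account>(
    [SELECT Id, Name FROM Account]);

for (Contact con : [SELECT Id, AccountId FROM Contact WHERE AccountId != null]) {
    // O(1) map lookup instead of an inner loop over all accounts.
    Account parent = accountsById.get(con.AccountId);
}
```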


What is not counted for Apex CPU Time Limit?


1] Database operations, such as DML, SOQL, and SOSL.

2] Wait time for Apex Callout

3] Aggregate SOQL


Sunday, 20 November 2022

Apex offers two ways to perform DML operations

Apex offers two ways to perform DML operations:


1. DML statements :


Eg : insert acc;


This is more straightforward to use and results in exceptions that you can handle in your code.


2. Database class methods :


Eg: Database.insert(acc, false);


By using the Database class methods, you can specify whether or not to allow partial record processing if errors are encountered. If you specify false for the second parameter and a record fails, the remainder of the DML operation can still succeed.


Instead of exceptions, a result object array (or one result object if only one sObject was passed in) is returned containing the status of each operation and any errors encountered.
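
ex : a minimal sketch of inspecting the result array returned by Database.insert with allOrNone set to false:

```apex
List<Account> accs = new List<Account>{
    new Account(Name = 'Acme'),
    new Account() // fails: Name is required
};

Database.SaveResult[] results = Database.insert(accs, false);

for (Database.SaveResult sr : results) {
    if (!sr.isSuccess()) {
        for (Database.Error err : sr.getErrors()) {
            System.debug(err.getStatusCode() + ': ' + err.getMessage());
        }
    }
}
```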


So how do you choose between the two options available?


If you need the errors that occur during DML to be thrown as an Apex exception that immediately interrupts control flow --> then go for DML statements


If you need to allow partial success of DML. Using the Database class methods, you can write code that never throws DML exception errors. Instead, your code can use the appropriate results array to judge success or failure. --> then go for Database class methods


PS : Database methods also include a syntax that supports thrown exceptions, similar to DML statements.
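
ex : a sketch of that syntax; with allOrNone set to true (the default), Database methods throw a DmlException just like a DML statement:

```apex
try {
    Database.insert(new Account(), true); // no Name, so this throws
} catch (DmlException e) {
    System.debug('Insert failed: ' + e.getMessage());
}
```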

Tuesday, 15 November 2022

New Salesforce Assert Class

 Using System.Assert.fail() to intentionally fail a test with a fatal error.

Using System.Assert.isInstanceOfType() to assert an object's type.


ex : 


try {

    Test.startTest();

    SomeClass.someMethod();

    Test.stopTest();

    System.Assert.fail('Expected SomeCustomException to be thrown');

 } catch (Exception baseException) {

     System.Assert.isInstanceOfType(baseException, 

                                    SomeClass.SomeCustomException.class, 

                                    'Expected an instance of SomeCustomException');

 }

  

Using Assert.isNull() and Assert.isNotNull() to assert null values.


 ex :  

  Opportunity opp = OpportunityService.getRecentClosedOpportunity();

  System.Assert.isNotNull(opp, 'Expected the opportunity to not be null'); 

  System.Assert.isNull(calloutResponse, 'Expected a null response');


Using Assert.areEqual() instead of System.assertEquals() for clarity.


ex : 


Account acc = new Account();

acc.Name = 'Test Account';

insert acc;

System.Assert.areEqual('Test Account', acc.Name, 'Expected account name to be Test Account');


String sub = 'abcde'.substring(2);

System.Assert.areNotEqual('xyz', sub, 'Characters not expected after first two');


Using Assert.isTrue() instead of System.assert() for clarity.


ex:   

  

Boolean containsForce = 'Salesforce'.contains('force'); 

System.Assert.isTrue(containsForce, 'Expected true result.');

Boolean containsXyz = 'Salesforce'.contains('xyz');

System.Assert.isFalse(containsXyz, 'Expected false result.');

System.Assert.isTrue(insertResults.isEmpty());


Note :


If your org already contains a class named Assert, write System.Assert.areEqual instead of Assert.areEqual so you can safely use the new class.

Sunday, 13 November 2022

Upgrade Behavior Options of Unlocked Packages :

 Options available when installing (upgrading) unlocked packages:

1.Mixed Mode (Default behavior) :

Specifies that some removed components are deleted, and other components are marked deprecated.

2.Deprecate Only :

DeprecateOnly specifies that all removed components must be marked deprecated. 

The removed metadata exists in the target org after package upgrade, but is shown in the UI as deprecated from the package. 

This option is useful when migrating metadata from one package to another.


3.Delete :


Delete specifies that all removed components that don't have dependencies are deleted,

except for custom objects and custom fields.

ex :

sfdx force:package:beta:install --package 04t... -t DeprecateOnly


Monday, 7 November 2022

Pre-requisites for creating Second-Generation Managed Packages

 1. Enable Dev Hub in your Org.

2. Enable Second-Generation Managed Packaging.

3. Install Salesforce CLI.

4. Create and Register Your Namespace


Note : Developers who work with 2GP packages need the correct permission set in the Dev Hub org.

       Developers need either the System Administrator Profile or the Create and Update Second-Generation Packages permission.

       

Namespace:


1.The namespace of a managed package is created in a namespace org and linked to the Dev Hub.

We can associate multiple namespaces with a single Dev Hub.

A namespace is linked with a 2GP when you run the force:package:create Salesforce CLI command, and you must specify the namespace in the sfdx-project.json file.


2.Multiple packages can use the same namespace.


3.Multiple packages sharing the same namespace can share code using public

Apex classes and methods annotated with @namespaceAccessible.


 

How to register/link a namespace ?


Log in to your Dev Hub org as the System Administrator or as a user with the Salesforce DX Namespace Registry permissions.


Some of the orgs that you use with second-generation managed packaging have a unique purpose.


1.Choose your Dev Hub Org.

2.Namespace Org.

3.Other Orgs.



Purpose of Dev Hub :


As owner of all second-generation managed packages.

To link your Namespaces.

To authorize and run your force:package commands.

Note : Salesforce recommends that your Partner Business Org is also your Dev Hub org.


Purpose of Namespace Org :


The primary purpose of the namespace org is to acquire a package namespace.

If you want to use the namespace strictly for testing, choose a disposable namespace.


After you create a namespace org and specify the namespace in it, open the Dev Hub org 

and link the namespace org to the Dev Hub org.


purpose of Other Orgs :


When you work with packages, you also use these orgs: you can create scratch orgs on the fly to use while testing your packages.

The target or installation org is where you install the package.

Behavior of Salesforce data in Apex tests

 1.Existing org data isn't visible to Apex tests.

2.As a best practice you should always create your own test data for each test.


Reasons to create your own test data :


1.Data unavailability

  your org might not have the data for both positive and negative tests.

2.Data inconsistency

  Data can be different between orgs and test runs.


Note : In some scenarios, like testing report data or field history records,

        you might need access to your org data because you can't create

        those records through Apex.

       In those cases, you can annotate your test class with @isTest(seeAllData=true).


  ex : @isTest(seeAllData=true)

       public class DemoCtrlTest{

          

          @isTest

          static void testAccountFilter(){

          

          }

       

       }  


3. Apex test data is transient and isn't committed to the database.


Note : which means the data is present only as long as your test is running.


4.@testSetup methods are helpful. Test setup methods are defined in the test class itself;

   they take no arguments and return no value.

   

   ex 

      @istest

      public class DemoCtrlTest{

      

      @testSetup 

       static void createTestData(){

         // create test data here

       }

      }


a. Test setup method is automatically executed first, before any test method.

b. Records created in the test setup method are accessible to all test methods.

c. Changes to records by a test method are rolled back before another test method is executed.

d. All records are rolled back after the test class finishes executing.


create test data using CSV file :


ex : @testSetup 

      static void createTestData(){

        Test.loadData(Account.sObjectType, 'Mock_Account_Data');

      }   
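
The static resource (assumed here to be a CSV file uploaded with the name Mock_Account_Data) lists field API names in the header row, for example:

```
Name,Website,Industry
Test Account 1,www.example.com,Banking
Test Account 2,www.example.com,Energy
```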


Test Data and Governor Limits :


Whenever a test class runs, test data creation and test execution both happen in the same transaction.


ex : 

Test.startTest();

Test.stopTest();


The startTest() method marks the point in your test code where the test 

actually begins and stopTest() method marks the point where your test ends.


Any code that executes between these two methods is assigned a 'new set of governor limits '.

The code inside start and stop block has one set of limits and the code outside has another set of limits.

The code outside, whether before the startTest() or after the stopTest() method, shares the same set of limits.
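
ex : a minimal sketch (DemoCtrl.processAccounts is a hypothetical method under test); the setup query consumes the outer limits, while code inside the start/stop block gets a fresh set:

```apex
@isTest
static void testProcessAccounts() {
    // Consumes the outer set of governor limits.
    List<Account> accounts = [SELECT Id FROM Account];

    Test.startTest();
    // Code here gets a fresh set of governor limits.
    DemoCtrl.processAccounts(accounts); // hypothetical method under test
    Test.stopTest();

    // Back to the outer limits; assert results here.
}
```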


Wednesday, 26 October 2022

Deleting Salesforce Debug Logs

 Using SFDX:


COMMAND 1:


sfdx force:data:soql:query -q "SELECT Id FROM ApexLog" -r csv > apexlogs.csv


COMMAND 2:


sfdx force:data:bulk:delete -s ApexLog -f apexlogs.csv


Using Workbench:


Navigate to queries -> SOQL Query:


Select Bulk CSV and Input “Select id from ApexLog” to query Box then click Query.


Then download the CSV file by clicking on “download completed batch results” icon under Batches


Once you have the csv file: Navigate to data->delete. Then select from file, choose the downloaded file and click next.


Select "Process records asynchronously via Bulk API", choose Object Type: ApexLog, then click Confirm Delete. Done.

How to add connected app in 2GP Managed Package?

 1.Create a connected app in the Salesforce instance where we maintain the namespace.

2.Create a first-generation managed package (1GP) and add the connected app.

It's fine if the connected app is the only component in the package.

Always use the same namespace for the 1GP package as for the 2GP package.

3.Take note of the version number of the connected app; this number is needed later.

4.Upload the 1GP package to create a package version.

5.Promote the 1GP version to the released state.

6.Promoting the 1GP version allows the connected app to be included in a second-generation managed package.

We don't need to install the 1GP version into an org.

7.Now, in the source code of the project where we are trying to generate the 2GP package, navigate to the 'connectedApps' folder.


Create an XML file named <connectedAppName>.connectedApp-meta.xml,

and the body of the XML file should be as shown below.


<ConnectedApp xmlns="http://soap.sforce.com/2006/04/metadata">

    <developerName><namespace>__<connected_app_Name></developerName>

    <label>A Connected App</label>

    <version>1.0</version>

</ConnectedApp>


Now generate a 2GP package and promote. Then the connected app is automatically added to your 2GP Package.


The version specified in the source file is the version number of the connected app. Use decimal formatting when specifying the version number. 

The version number must match the version number of the connected app before it was added to the 1GP managed package.


Note:


When you add a connected app to a 1GP package and upload the package, the version number of the connected app is auto-incremented. 

For example, when version 1.0 of a connected app is added to a 1GP package, the package version increments the version number 

of the connected app from 1.0 to 2.0. When creating the source file for your 2GP package, specify the version number of the connected app 

before it was uploaded into a 1GP package, in this case, 1.0.


Reference:


https://developer.salesforce.com/docs/atlas.en-us.224.0.sfdx_dev.meta/sfdx_dev/sfdx_dev_dev2gp_connected_app.htm


Saturday, 15 October 2022

Salesforce : Order of Execution

 System Validation Rules (SVR)

Before-Save RT Flows

Apex Before Triggers

Runs Most SVR Again and all Custom Validation Rules

Duplicate Rules

Save Record but does not Commit.

Apex After Triggers

Assignment Rules

Auto-Response Rules

Workflow Rules

Escalation Rules

Process/Autolaunched Flows

After-Save RT Flows

Rollup Summary Fields

Commits All DML Operations to Database

Post-Commit Logic Like Email Send


Sunday, 14 August 2022

Salesforce Shield - Data Monitoring and Data Encryption

 Data Encryption :


The process of applying a cryptographic function to data that results 

in ciphertext,also known as encrypted data.


Salesforce Shield :


Salesforce Shield is a suite of products that gives you more control over your security and monitoring of sensitive data.


Platform Encryption :


Platform encryption is Salesforce's product that gives you a point-and-click way to encrypt data at rest.

Platform encryption also allows you to select objects and fields that will be encrypted and what key and schema that will be used for the encryption.


Event monitoring :


Event monitoring is the counterpart to that data security. 

It enables you to monitor how that data is accessed in the platform and also taken out of that platform. 

Within event monitoring, you can set up policies that will monitor specific criteria that has been met and proactively block or notify you when that data is being accessed.


Common Terms :


1.Data Encryption Keys :


Keys used to encrypt and decrypt data on the database.


2.Encryption at Rest :


Data that is stored on disk in an encrypted state.


3.Bring your Own Key :


When you're able to bring your own material to encryption.


Tenant Secret :


A piece of the encryption credential that is specific to your organization.


Note :


A tenant secret is an organization-specific secret used in conjunction with the master secret Salesforce has to generate the information needed to actually encrypt your data.


Shield Features :


1.Specify what fields & objects should be encrypted at rest.

2.Control over key permission

3.Bring your Own key.

4.Maintain most existing functionality with encryption*

5.Monitor key activities performed by your users.


Encryption Schemes :


How does this encryption actually work?


Within Salesforce, they use two different types of algorithms. 


1.Probabilistic scheme

2.Deterministic scheme


Probabilistic scheme :


This is the default encryption of Salesforce, and this is where data is fully randomized and is the most secure option.

Each bit of data is turned into a fully random ciphertext string every time it's encrypted.

Encryption generally doesn't impact users who are authorized to view the data.


The exception is when logic is executed in the database or when encryption values are compared to strings or each other. 

In these cases, because the data has been turned into a complete random, patternless string, filtering isn't possible. 

It's recommended to use probabilistic on fields that are not going to be used for filtering or comparisons such as Social Security numbers, 

phone numbers, etc.


Deterministic scheme :


To be able to use filters when data is encrypted, we have to allow some type of pattern into our data. 

Deterministic encryption uses a static initialization vector so that encryption data can be matched to a particular field value. 

The system can't read that piece of data that's encrypted, but it does know how to retrieve the ciphertext that can stand for that piece of data.

The IV is unique to a given field in a given org and can be only decrypted with an org-specific encryption key. 



Classic Encryption vs Shield Encryption :


Features                 Classic Encryption       Shield Encryption


Encryption at Rest               Y                      Y

Native Solution                  Y                      Y

Masking                          Y                      -

Encrypt Standard Fields          -                      Y

Encrypt Custom Fields      Only special field type      Y

Encrypt Files                    -                      Y

Encrypt Search & Events          -                      Y


Tenant secret :


Generate tenant secret is where Salesforce generates everything for you on your behalf and manages everything inside of a protected encrypted schema.


Bring your Own Key :


Bring your own key, however, is the opposite of that where you come to the table with more information and manage that yourself and give Salesforce enough information to be able to encrypt the data with your own key.


Tenant secret Type :


1.Data in Salesforce

2.Search Index

3.Event Bus


Note :


1.Probabilistic algorithm


Probabilistic algorithm is the default encryption of Salesforce.

This is where data is fully randomized and is the most secure option. 

Each bit of data is turned into a fully randomized cipher text string every time it's encrypted. 

Encryption within Salesforce generally doesn't impact users who are authorized to view the data. 

The exceptions are when logic is executed in the database or when encryption values are compared to a string or to each other. 

In these cases because the data has been turned into a random patternless string, filtering is not possible.


2.Deterministic algorithm


To be able to use filtering when data is encrypted, we have to allow some patterns into our data. 

Deterministic encryption uses a static initialization vector, or also known as IV, so that encryption data can be matched to a particular field value. 

The system can't read a piece of data that's been encrypted, but it does know how to retrieve the cipher text that stands for that piece of data. 

The IV is unique for each given field in a given org and can only be decrypted with your org‑specific encryption key.


Within deterministic, there's two subtypes.


1.case-sensitive deterministic

2.case-insensitive deterministic


Note : It's very important when you're using deterministic encryption that you choose this correctly. Otherwise, you will not get the results you're expecting within your filters.


Event Monitoring :


Event Monitoring within the Salesforce Platform is a granular, detailed view of how users and the system are performing at an event level. 

Every time an action is performed or a record is changed, what Salesforce calls an event is created within the platform. 

Within the Salesforce Shield, you have a granular view of being able to monitor what's happening within the platform. 


1.monitoring activity

2.Increase Adoption

3.Optimize Performance


Within Event Monitoring, there's something called transaction security policies.

These are the policies that allow you to monitor or take actions on certain types of data interaction with the system.


1.Condition Builder

Condition Builder actually allows you to apply these rules with no code and with the interface.

2.Apex

The second is you can apply Apex to your transaction security policies to get a fine-grained way of controlling what notifications on what field and objects these Event Monitoring events are actually occurring.


Within the transaction security policy, there are four types of notifications.

1.Block

This block allows you to block a user's interaction completely when they've done a specific thing, such as try to load a report that has more records than you've allowed.

2.multi-factor authentication

The second is require a multi-factor authentication so that a user can prove that they are actually who they say they are. 

3.Email Notification

A simple notification to your system admins or a group of individuals so you can understand what's happening in real time.

4.In-app notification

An In-app notification back to that system admin or a group of admins to make sure you understand what's happening. 


Tableau CRM for Event Monitoring :


This platform gives sales, service, and the other core applications inside of CRM the ability to have advanced analytics and gives you the ability to slice and dice and create tables and visualizations that are above and beyond the standard reporting and dashboarding tools inside of core.


Benefits of the Event monitoring App


1.Easy Access

2.Visual

3.Filter & Facet

4.Shareability


Wednesday, 6 July 2022

Internationalization properties in LWC

 To internationalize components, you can use the internationalization properties in Salesforce as shown below:

import internationalizationPropertyName from '@salesforce/i18n/internationalizationProperty';

The property values returned are for the current user.



1.Internationalize Locale Date

ex:

import LOCALE from "@salesforce/i18n/locale";

@track date = new Date(2022, 6, 25);
@track formattedDate;
this.formattedDate = new Intl.DateTimeFormat(LOCALE).format(this.date);
  
2.Internationalize Currency
ex:
import LOCALE from "@salesforce/i18n/locale";
import CURRENCY from "@salesforce/i18n/currency";

@track number = 10050.5;
@track formattedCurrency;

 this.formattedCurrency = new Intl.NumberFormat(LOCALE, {
      style: "currency",
      currency: CURRENCY,
      currencyDisplay: "symbol"
    }).format(this.number);
    
3.Internationalize Time Zone    

ex :
import TIMEZONE from "@salesforce/i18n/timeZone";

@track timeZone;
this.timeZone = TIMEZONE;

Monday, 20 June 2022

Enable a LWC component to render in light DOM

 Render LWC components outside of the shadow tree.


-> Enables global application styling

-> Eases 3rd-party tools integration.


<template lwc:render-mode='light'>

    <my-header>

        <p>Hello World</p>

    </my-header>

</template>


import { LightningElement } from 'lwc';

export default class LightDomApp extends LightningElement {

    static renderMode = 'light'; // the default is 'shadow'

}


No shadow root is created within the component.

Styles aren't scoped anymore.

Events aren't retargeted.

<slot> elements aren't rendered in the DOM.

Second-generation managed packages

 SFDX :

Salesforce Developer Experience (DX) is a command-line interface which provides tools

to manage data and metadata in Salesforce environments.


DH: Dev Hub :

An org feature which provides access to create and manage scratch orgs and

create and manage second-generation packages.


Package :

Bundled container of code and metadata which can be published on AppExchange or shared directly to subscribers.


2GMP :


Second-generation managed package: created, developed, and managed via the SFDX CLI.


1GMP :


First-generation managed package, also known as a classic package: created and managed from Packaging or Patch Orgs.


ISV :

Independent Software Vendor


The company, who releases the managed package.


NS :

Namespace is the prefix used by managed packages to isolate the metadata scope.


AppExchange :

Salesforce marketplace where packages can be found for installation or published by ISV.


Security Review :


Salesforce acknowledgement of managed package required for publishing on AppExchange.


NO :

Namespace Org, the org where namespace is registered.


PKO :


Packaging Org, the org where the package is created, built and registered.


PTO :

Patch org, the org where the patch version of package is developed and built.


Sbo :


Subscriber Org, the org where your subscriber clients install and use your package.


Sco :

Scratch Org is a source-driven, temporary, disposable organization.


Push Upgrades :


A feature which allows publishers to push upgrades to subscribers without their consent.


Package Version :


A package snapshot, which is defined by Major, Minor, Patch, and Build version numbers.


How 2GMP are different from 1GMP ?


1. What is the source of truth ?


Source of truth : Packaging Org vs VCS


2.Who is the owner of package and metadata?


Owner of package and metadata : Pko vs DH+VCS.


3.How many packages may belong to an Org?


Number of Packages per org : 1 vs many


4.Where is the namespace registered?


namespace : PKO vs Namespace Org(NO) linked to DH.


5.How many packages may share namespace?


Number of package per NS : 1 vs many


6.Which are the options to share code?


Share code : global or @namespaceAccessible


ex :

@namespaceAccessible

public with sharing class JQ {


public class InvalidJSONQueryException extends Exception{}


@testVisible Map<String,Object> internalRepresentation;

@namespaceAccessible

public JQ(String data){

  internalRepresentation=(Map<String,Object>)JSON.deserializeUntyped(data);

}


}


7.Can package create or uninstall be automated?


SFDX commands can be used to create or uninstall.


8.Is branching supported in package versioning?


Package versioning : linear vs branching


9.How patch versions can be created?


Patch versions: Patch Org vs VCS



which functionality is supported in 1 GMP but not in 2 GMP?


1.Components can't be deleted from packages.

2.Package versions can't be deprecated.

3.Apex versionProvider isn't supported.

4.A default language for labels in packages can't be specified.


Q) An SFDX CLI command to create a new package version fails because the Package2VersionCreates limit is exceeded,

but the developer urgently needs to create a new version for a client. What can the developer do?



sfdx force:package:version:create --skipvalidation


Include the --skipvalidation switch in the SFDX CLI command.

This skips validation during package version creation;

you can't promote unvalidated package versions.


However, unvalidated package versions have a separate limit with a

much higher value, even for free developer orgs. The value of Package2VersionCreatesWithoutValidation is 500,

while the value of Package2VersionCreates is 6.


Sunday, 19 June 2022

Transaction Security Policies in Salesforce

 Transaction Security is a feature that monitors Salesforce events in real time and applies actions and notifications based on rules you create. These rules, or policies, are applied against events in your org.(ex : our policy was to have no more than three active sessions per user.) You create policies for certain event combinations, and specify actions to take when those events occur.

Using Transaction Security Policy, you can define events to monitor and take action when that event happens. 

Here are a few examples of the events that you can monitor.


1.You want to block and notify the administrator when somebody tries to export the ‘Contact’ information

2.You want to raise the session security to Two-Factor Authentication (2FA) 

  when a user tries to access Salesforce from two different IP addresses within the last 24 hours

3.You want to block the access when someone tries to login from a particular country or from a particular operating system or browser

4.You want to block chatter posts containing particular keywords

5.You want to limit the concurrent number of sessions for a user or for an administrator


And when these events occur, you can take these actions


1.Block – Don’t let the user complete the request

2.Two-Factor Authentication – Step up the security and prompt the user to confirm identity by using two-factor authentication, such as the Salesforce Authenticator app

3.Freeze user – Prevent further logins into your org by the user.

4.End session – Prompt the user to end an existing session when the number of concurrent sessions a user is allowed to have is strictly limited.

Note :

 Transaction Security is a framework that intercepts Salesforce events in real-time and applies appropriate actions and notifications  based on the security policies you create. 

 

 Transaction Security Policy requires purchasing 'Salesforce Shield' or 'Salesforce Event Monitoring' add-on subscriptions. 

 

Saturday, 18 June 2022

PICKLISTCOUNT Function in Salesforce

 Salesforce has an undocumented function called PICKLISTCOUNT. This function returns the number of selected values in a multi-select picklist. This function is helpful in validation rules.

Use case: Salesforce users with ABC profile can select only one value in a multi-select picklist field.

Validation Rule: PICKLISTCOUNT( MultiSelect_Picklist__c ) > 1 && $Profile.Name = "ABC"

Sunday, 5 June 2022

Restriction Rules and Scoping Rules

 Restriction Rules :


Using restriction rules, we can apply an additional level of filtering on top of the records a specific user already has access to.



Scoping Rules :


Scoping rules help filter the default records visible to a user based on specific criteria, but they don't prevent access to other records.





Sunday, 22 May 2022

Second-Generation Managed Packages

 what are packages ?


1.Package is a container of meta data.

2.It contains a set of related features,customizations and schema.

3.Use packages to move metadata from one Salesforce org to another.

4.As you add, remove or change the packaged metadata,you create many package versions.

5.Each package version has a version number, and subscribers can install a new package version into their org through a package upgrade.

6.you can distribute the package to your customers via AppExchange also.





Pre-requisites for 2GP Package :


1.Enable Dev Hub in your Org.

2.Enable Second-Generation Managed Packaging.

3.Install Salesforce CLI.

4.Create and Register your Namespace.

5.Assign the required permissions to the Developers-

Developers need either the System Administrator profile or 

the Create and Update Second-Generation Packages permission in Dev Hub Org.


Dev Hub Org :


-> As owner of all Second-generation managed packages.

-> To link your namespaces.

-> To authorize and run your force:package commands.


Namespace Org :


-> The primary purpose of the namespace org is to acquire a package namespace.

-> After you create a namespace org and specify the namespace in it, link the namespace org to the Dev Hub Org.


Other Orgs :


-> you can create scratch orgs on the fly to use while testing your packages.

-> The target or installation org is where you install the package.



Create a package :

=================


Create Package :


sfdx force:package:create --name DreamDemo --path source-folder --packagetype Managed


Package Version Creation :


sfdx force:package:version:create --package "DreamDemo" --installationkey test1234 --wait 10 --codecoverage


Promote the Package :


sfdx force:package:version:promote --package "DreamDemo@0.1.0-1"


Package Ancestors :

==================

-> Only package versions that have been promoted to managed released state can be listed as an ancestor.

-> When we define an ancestor for a package, it inherits the manageability rules of the specified ancestor package.


Note :

Specify the package ancestor in the sfdx-project.json file using either the ancestorVersion or ancestorId attribute. Use the ancestor that’s the immediate parent of the version you’re creating.



"packageDirectories": [
    {
        "path": "util",
        "package": "Expense Manager - Util",
        "versionNumber": "4.7.0.NEXT",
        "ancestorVersion": "4.6.0.1"
    }
]



"packageDirectories": [
    {
        "path": "util",
        "package": "Expense Manager - Util",
        "versionNumber": "4.7.0.NEXT",
        "ancestorId": "04tB0000000cWwIAE"
    }
]

    

"packageDirectories": [
    {
        "path": "util",
        "package": "Expense Manager - Util",
        "versionNumber": "4.7.0.NEXT",
        "ancestorId": "expense-manager@4.6.0.1"
    }
]

    

Best Practices :


1.Work with only one Dev Hub, and enable Dev Hub in your partner business org.

2.Include the --tag option when you use the package:version:create and package:version:update commands.

This option helps you keep your version control system tags in sync with specific package versions.

3.Create user-friendly aliases for packaging IDs and include those aliases in your Salesforce DX project file and 

when running CLI packaging commands.

4.Make use of scratch orgs and source control systems for development instead of using the Namespace Org.

5.Always test using a beta package version before you promote and upgrade the package.
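For example (best practice 3 above), aliases live in the packageAliases section of sfdx-project.json; the names and IDs here are illustrative only:

```json
{
  "packageAliases": {
    "Expense Manager - Util": "0HoB00000004CFpKAM",
    "expense-manager@4.6.0.1": "04tB0000000cWwIAE"
  }
}
```

Once defined, you can pass the alias (for example, "Expense Manager - Util") to CLI packaging commands instead of the raw 0Ho/04t ID.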


Advantages and Gaps of 2GP :

============================


1.Define one Namespace for your company and use it for every package you develop.

2.With 2GP, you can build your application as separate modules.

Smaller functional modules are always easy to develop, test and maintain.

3.Gain flexible, source-driven development with the help of the source control systems that you use.


Gaps :

1.Components can't be deleted from packages.

2.Package versions can't be deprecated.

3.Apex VersionProvider isn't supported.

4.A default language for labels in packages can't be specified.


Comparison of 2GP and 1GP Managed Packages








Sunday, 15 May 2022

Salesforce Async SOQL

 What is Async SOQL and when to use it?


What is Async SOQL?

Async SOQL is a method to run SOQL queries.


When to use Async SOQL?

When you have to query a large amount of data (for example, big objects containing more than 1 million records) and then copy that data to another custom or standard object where the data set is easier to examine.


How to use Async SOQL?

You need to create a POST request with the query as the body.


Example of the POST request:


URI: yourSalesforceInstance/services/data/v38.0/async-queries/


Body:

{
    "query": "SELECT field1__c, field2__c FROM CustomObject1__c WHERE probability > 90",
    "targetObject": "CustomObject2__c",
    "targetFieldMap": {
        "field1__c": "field1__c",
        "field2__c": "field2__c"
    }
}


Explanation:

- The above POST request will be sent to the org. 

- The SOQL query will retrieve the records and then create a copy of the records in the target object according to the field mappings mentioned in the body.
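Since the request is plain JSON over REST, the body can be composed in any language before POSTing it to the async-queries endpoint. A minimal Python sketch; the object and field names are the hypothetical ones from the example above:

```python
import json

# Hypothetical Async SOQL request body, mirroring the example above.
# CustomObject1__c / CustomObject2__c are illustrative names, not real objects.
body = {
    "query": "SELECT field1__c, field2__c FROM CustomObject1__c WHERE probability > 90",
    "targetObject": "CustomObject2__c",
    "targetFieldMap": {"field1__c": "field1__c", "field2__c": "field2__c"},
}

# This string becomes the POST body sent to /services/data/vXX.0/async-queries/
# with an Authorization: Bearer <session token> header.
payload = json.dumps(body)
```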


Tuesday, 3 May 2022

MuleSoft 4 Fundamentals

 We need software that integrates multiple systems seamlessly.

API-led Connectivity :

API-led connectivity is a methodical way to connect data to applications through reusable and purposeful APIs. 

These APIs are developed to play a specific role, unlocking data from systems, composing data into processes, or delivering an experience.


Three Types of APIs :


1.System APIs

2.Process APIs

3.Experience APIs


System APIs :


System APIs usually access the core systems of record and provide a means of isolating the consumer from the complexity of any changes to the underlying systems. 

Once built, many users can access data without any need to learn the underlying systems and can reuse these APIs in multiple projects. 

Think of APIs that access a database (at least its common operations), or APIs that ease access to other shared resources.


Note : Encapsulate data systems into an API.


Process APIs :


These APIs interact with and shape the data within a single system or across systems, breaking down the data silos. 

They are created without dependence on the source systems from which the data originates (they only call the System APIs), 

and without dependence on the target channels to which the data is delivered. 

For a given process API, think of a transformation: data coming, for example, from a database is reshaped into a certain schema delivered to a 

certain web page. Regardless of the web page API and the database API, the logic in the middle is the same.


Note : Aggregate and process the result of System APIs.


Experience APIs :


Experience APIs are how data can be reconfigured so that it is most easily consumed by its intended audience, all from a common data source rather than setting up separate point-to-point integrations from each channel. 

An experience API is usually created with API-first design principles where the API is designed for this specific user experience in mind.


Note : Expose the data for frontend



API Language :


There are many, but the main ones are Swagger, now known as OpenAPI 3.0, and RAML 1.0. 

RAML is native to MuleSoft.


RAML is a YAML-style language to define an API. 


Main Changes from Mule 3 :


1.MEL replaced by DataWeave 2.0

2.Added Design Center

3.Flow Designer

4.Exchange


Monday, 2 May 2022

Salesforce Platform Events

 1.Platform events are a special type of object (__e).

2.Platform events only support create/read permission.

3.No SOQL support; you must replay events to retrieve them.

4.Fields limited to base data types only (i.e. text, date, number, Boolean).

5.You can subscribe to events via Apex, Process Builder, and the CometD utility (Java or JavaScript).


Note :

With platform events on an event bus, you have decoupled your information publishers from your information subscribers.


The publisher of information doesn't have to know anything about anybody that's listening. And if a subscriber 

happens to fail or hit an exception, it doesn't block anybody else from performing actions. 

That's one of the real powers of this platform event model.


The only way to get events is to subscribe to them, and potentially to replay them, 

which replays the entire stream of events from that point onward.


Platform events cannot be edited; they are immutable.


Platform events have two publication types :


1.Publish Immediately disregards any form of rollback.

2.Publish On Commit honors transaction controls.



Note :

1.You can publish a platform event immediately, which means it hits the bus and goes, no matter what else is happening in the system.

2.The second option is to wait until after the commit of the current transaction to publish. 

If you're publishing a platform event because a record changed successfully, 

you'd want to wait until after commit. But if you hit an error, you want to publish that event right away.
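As a sketch, publishing from Apex looks like this; the Order_Event__e event and its field are hypothetical, not part of any standard org:

```apex
// Hypothetical platform event Order_Event__e with a custom text field.
Order_Event__e evt = new Order_Event__e(Order_Number__c = '12345');

// EventBus.publish returns a Database.SaveResult per event; check for errors.
Database.SaveResult sr = EventBus.publish(evt);
if (!sr.isSuccess()) {
    for (Database.Error err : sr.getErrors()) {
        System.debug('Publish failed: ' + err.getMessage());
    }
}
```

Note that whether this publishes immediately or after commit is controlled by the Publish Behavior setting on the event definition, not by the Apex call.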


Note :

1.'Processes' utilize the SAME user that fired the event.

2.'Triggers' utilize the AUTOMATED PROCESS user.


Asynchronous processing :

=========================

A related feature is Change Data Capture: any record change event that happens on an enabled object 

has a platform event published, 

and the body of that event contains the fields that changed.


Events are processed in batches that can scale up quickly.

-> Monitor limits in the transaction and pick up where you left off 

with TriggerContext.setResumeCheckpoint(replayId).

-> Abort and retry a batch from the beginning by throwing an EventBus.RetryableException.
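A minimal subscriber trigger sketch showing both mechanisms; the Order_Event__e event is the same hypothetical example as above:

```apex
// Hypothetical subscriber trigger; runs as the Automated Process user.
trigger OrderEventTrigger on Order_Event__e (after insert) {
    for (Order_Event__e evt : Trigger.new) {
        // ... process one event ...

        // Checkpoint: if a later event in this batch fails and the batch
        // is retried, redelivery resumes after this event.
        EventBus.TriggerContext.currentContext().setResumeCheckpoint(evt.ReplayId);
    }
    // To abort and retry the whole batch from the beginning instead:
    // throw new EventBus.RetryableException('Transient error, please retry');
}
```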

Sunday, 1 May 2022

Salesforce Omni-channel

 Omni-channel Channel Types :


Omni-channel is capable of working with two types of channels :


1.Real-time Channels 

2.Asynchronous Channels 


1.Real-Time Channels :

Real-Time Channels are Channels where the person asking support is expecting a real-time answer to their request.

For example, Phone calls or live chat.

2.Asynchronous Channels

Asynchronous Channels are Channels where the person asking support is expecting a reply to their request at a later time.

For example emails or contact forms.


Omni-channel Routing Destinations :


Omni-channel can route Work Items to 3 different destinations.

1.Route to Queue

2.Route to Skill

3.Route to Agent 


1.Route to Queue :


Work items fall into a queue of users, who then pick up work first-in, first-out.


2.Route to Skill :


Work Items are Assigned to the Agents that have the proper skills to work on the request.


3.Route to Agent :


Work Items are assigned to a specific Agent directly.


Omni-Channel Agent Properties :


Agents have properties that Omni-Channel uses in order to select which agent is 

better suited to receive a work Item based on its Routing Destination.


1.Status & Capacity

2.Skills

3.Queue Membership


1.Status & Capacity :

Agents all have a Status, which models their availability, and a Capacity, which models the maximum amount of work they can take.


Examples of Statuses are : Away, Available, On Break


2.Skills :

Agents have a set of Skills that they can leverage to take certain work items.

Work Items can require one or multiple skills to be assigned.


Examples of Skills are : English, 2nd Tier, Robot Butler Maintenance


3.Queue Membership:


Agents can be assigned to specific queues, and thus work on Work Items that fall in those Queues.


Examples of Queues are : Billing, Complaint


Omni Supervisor Features :


Supervisors get access to the Omni Supervisor function in Omni-Channel in order to monitor their team easily.


1.Monitor Agents 

2.Monitor Work Backlogs

3.Monitor Assigned Work


Monitor Agents :


Supervisors can monitor the Agents they have access to through the Supervisor Configuration assigned to them.


This includes the Agent's status and status timeline, the Agent's current work and open capacity, and how long they've been logged in.


Monitor Work Backlogs :


Supervisors can monitor specific Skill & Queue backlogs to gauge how much work is left to be assigned to agents.


This allows them to have simple visibility on where to focus their efforts.


Monitor Assigned Work :


Supervisors can monitor work that's being worked on.


Using specific channel functionalities, they can even listen in on phone calls, 

or see what agents are typing in the chat even before they've sent it.


Telephony in Salesforce Service Cloud :


1.Open CTI


Open CTI is a telephony JavaScript API that allows vendors of telephony systems to develop integrations that interact with a 

Salesforce softphone directly in the end user's browser.


2.Vendor-Specific Implementation


Vendors also have the possibility to integrate deeper into Salesforce by integrating

with specific Service Cloud functionalities like Omni-Channel or Service Cloud Voice.


Vendor Telephony Integration Levels :

1.Simple Integration-pure Open CTI integration


A vendor that integrates with Open CTI will only provide a Web-based softphone client for end-users to 

interact with. This means the integration is limited and doesn't integrate with Omni-Channel.


2.Partial Integration -Open CTI with additional functionalities 


A vendor can also add additional functionalities to their integration.

In that case, you have to look at your vendor's documentation to ensure 

Omni-Channel is included as part of their additional functionalities.


3.Complete Integration - Service Cloud Voice 


A vendor that maintains a Service Cloud Voice implementation is the most complete 

telephony integration you can find as this leverages Salesforce as the central place

for all telephony actions to happen.


Telephony & Omni-Channel Integration Benefits :


1.Bi-Directional Agent Status Syncing

One of the key benefits of integrating your telephony in Omni-Channel is that the agent's status 

synchronizes on both systems,allowing your agents to do it in one place only.

2.Handle Phone Calls right from Omni-Channel

 Once integrated, phone calls can be managed inside of Omni-Channel instead of your agents having to manage 

 two different windows, one for the softphone and the other for Omni-Channel.

3.Record & Transcribe Phone Calls, and much more

 If your vendor implements those functionalities,you can even record and transcribe phone calls right from Salesforce.

 Other functionalities like 'Supervisor Listen In', 'Queued Callbacks' or 'Mean Opinion Score'

 are functionalities your vendor could provide to you too.

 

Salesforce Chat :


To set up Chat in Service Cloud and route it to Omni-Channel.

 

1.Omni-Channel

Used to route the chat conversations based on your Omni-Channel routing.


2.Embedded Service 


Used to embed your Salesforce Chat wherever you want (ex: website,app).


3.Einstein Bots


Used to filter your chats and gather pre-information before they arrive to your agents.


4.Agent Chat Console


Used by your agents in order to interact with the chats they have open with customers.

Sunday, 3 April 2022

Dealing with View All Data & Modify All Data access in Salesforce

 Do you know that if you remove View All/Modify All access from the object level in a profile, the View All Data/Modify All Data system permission gets removed as well?

For example, if you have enabled the View All Data permission in a profile, it gives view-all access for all the available objects. Now, if you remove view-all access from one of the objects, it will disable the View All Data permission.

So, we need to be extremely careful while dealing with View All Data & Modify All Data access.

Sunday, 27 March 2022

GraphQL

 GraphQL is a query language for your API.

It has been developed as a more flexible and efficient alternative to REST.


GraphQL - Query language for API


1.Provides clients the power to ask for exactly what they need and nothing more.

2.GraphQL APIs get all the data your app needs in a single request.

3.Language agnostic - Plenty of client and server libraries are available.



Note :

In GraphQL we can compose the request in the form of a GraphQL query and ask 

for exactly what we need to build the app. It then responds with a JSON object containing exactly what we asked for.


No multiple round trips like REST. No over-fetching or under-fetching.


REST vs GraphQL :


REST :

1.Multiple round trips to collect the information from multiple resources.

2.Over-fetching and under-fetching of data resources.

3.Frontend teams rely heavily on backend teams to deliver the APIs.

4.Caching is built into HTTP spec.


GraphQL :

1.One single request to collect the information by aggregation of data.

2.You only get what you ask for: tailor-made queries for your exact needs.

3.Frontend and backend teams can work independently.

4.Doesn't use the HTTP spec for caching (libraries like Apollo and Relay come with caching options).
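For illustration, a minimal query against a hypothetical schema, asking only for the fields the client needs:

```graphql
# Hypothetical schema: one request fetches an account plus its owner's email.
query {
  account(id: "0015g00000XYZ") {
    name
    owner {
      email
    }
  }
}
```

The response is a JSON object mirroring the query's shape, with no extra fields and no second round trip for the nested owner.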


Sunday, 13 March 2022

Salesforce Experience Cloud Deployments

 There are two key deployment methods when it comes to deploying your Salesforce community.

1.Change sets 

This is a point-and-click toolset that represents a list of customizations that can be deployed 

to any connected org within your Salesforce organization.

Note :

1.Point-and-click-based.

2.Under Setup menu in Salesforce.

3.Migrate changes between your orgs.


There are some considerations when it comes to change sets.

-> Experience Template Changes :

The first of those is Experience template changes. So if you make changes to your template within, say, stage or dev, 

you're going to need to make those changes manually in the upstream environment before deploying your broader change set.

-> Audience targets :

So if you make any updates to any of your audiences, to any assignments around audiences, 

you're going to need to make those manually within that upstream environment.



2.Metadata API

This is more of a code-based tool set that allows you to deploy a set of customizations more programmatically to any org that you choose.


1.Code-based

2.Utilizes app or command line

3.Migrate from one org to another.

4.Experience Bundle 


The ExperienceBundle allows us to extract granular site metadata so that we can quickly update and deploy bits 

and pieces of our site without having to deploy the entire site at once. So if you've made a very small change 

to your site and you just want to deploy that change, the Metadata API using the ExperienceBundle allows this to happen.
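As a sketch, a site's ExperienceBundle can be retrieved or deployed with a package.xml manifest like this; the site name My_Site1 is an assumption:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <!-- Hypothetical Experience Cloud site name -->
        <members>My_Site1</members>
        <name>ExperienceBundle</name>
    </types>
    <version>55.0</version>
</Package>
```

ExperienceBundle must be enabled in your org's Digital Experiences settings before the Metadata API returns sites in this granular format.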


Experience Cloud Moderation :


There are three core areas when it comes to moderation.

1.moderation criteria

2.moderation rules

3.moderation settings


Note :

Essential for healthy site collaboration.

Core areas of moderation :

-> Member and content criteria.

-> Rules to block, review, replace and flag content.

-> User content flagging.

-> Moderation workspace.


moderation criteria :


Within Salesforce communities, there are two types of moderation criteria.


1.member criteria

2.Content criteria


-> Member Criteria :

Member criteria utilizes member information to designate which member to target with a moderation rule. 

This could be the type of member such as customer, partner or internal or even based upon their profile. 

This can also be based upon their join date or whether they've posted or commented in the community. 

-> Content Criteria :

Content criteria is all about keyword searching. This could be used to protect from profanities being used in the community 

or flagging competitor names or certain keywords that you want to call out from a moderator's perspective.


Moderation Rules :


There are two types of moderation rules within Salesforce communities.


1.Content rules

2.Rate rules


-> Content rules :


Content rules are used to moderate the content that's being posted within your community.

This is comprised of a couple different settings.

 a.content type

   Do you want this rule to apply to a post, a comment, or both?

 b.moderation action

   This is where you choose one of several actions, whether to block, flag, replace or review that item. 

 c.member message

  This is where you can give a message to the member, noting the action you're taking and why you're taking it.

 d.member and content criteria 

   So utilizing that member and content criteria you have previously set up, you configure them here within your rule, 

   and it will only apply to those members and those keywords you've previously designated

   

-> Rate rules :


Rate rules are there to help limit the number of times somebody has posted within your community. 


a.content type 

This is comprised of content type. So again, do you want this to apply to a post, a comment, as well as other content types such as private messages 

or files? Then what members do you want it to apply to?

b.member criteria

So you can reuse your member criteria here within a rate rule to determine which members this rate rule should apply to.

c.rate limits

Now there's several different options when setting your rate limits.

The first of those is the time period you're looking at, and that can either be 3 minutes or 15 minutes. 

And then you're going to set how many times are they posting within that time period before you notify a moderator and then before you freeze them as a user.

  

What moderation actions are available within our moderation rules?

->  block   

This prevents the content from being published at all. 

So if it contains a keyword, it will be blocked, and that user won't be able to post it until they remove that particular keyword.


Note : Prevents content from being published.


-> review

This allows a moderator to review a post or a comment before it is published within the community. 

So it allows the user to post it, but puts it in a review status where only a moderator or the poster can see that post 

until a moderator approves it within the moderation workspace.


Note : Allows Moderators to review before being published.


-> replace

This allows publishing, but replaces the keywords that are found within that post or comment with asterisks. 


Note : Allows Publishing but replaces keyword(s) with asterisks.


-> flag

This allows publishing, but it automatically flags that content for a moderator's review. So it's still available within the community. 

Other members are able to see that, but it's flagged so the moderator can review it and take action on it, if necessary. 


Note : Allows publishing but automatically flags content.



Moderation settings :


Moderation settings cover a couple of additional areas for moderating your community.


-> user content flagging

This is a checkbox under community administration where I can allow my users to flag posts as they see fit to bring to a moderator's attention. 

So if something slips through a moderation rule, a user could flag it, and it would appear within my moderation workspace as a moderator for me 

to review and take action on. 

-> file types and sizes 

This is where I can set the types of files and the sizes of those files that can be uploaded to my community as a moderator.

-> moderation workspace


Home/Overviewpage :


This is where a moderator would go to do their work. They would get an overview page to see how many pending discussions 

are out there or how many flagged posts are out there right from a Home and Overview tab.

Moderate Page :


That's where I can go to my moderate page where I can review the list of items I need to moderate as a moderator, 

and then I can take action on them, as necessary.


Rules Page :


The rules page where I can manage all of my moderation rules, my moderation content criteria, and my moderation member criteria.


Summary :


1.moderation is necessary for a healthy site.

2.moderation criteria for members and content.

3.moderation rules can be used for monitoring member-generated content.


Experience Cloud Analytics :


1.reports and dashboards package


The reports and dashboards package that Salesforce provides. 

This is a package on the AppExchange that comes with a set of dashboards, reports, 

and custom report types to help you get off the ground when reporting on your Experience Cloud site.


2.dashboard workspace


Once you have the package installed, then you're going to take a look at the dashboard workspace. 

This is the analytics landing page for your Experience managers where they're going to go for all things reporting on the Experience Cloud site. 


Sunday, 20 February 2022

Salesforce Data management & Integration

 1.Big objects

2.External Objects.

3.Canvas APP.


Big objects :

A big object stores and manages massive amounts of data on the Salesforce platform. 

We can archive data from other objects or bring massive datasets from outside systems into a big object to get a full view of your customers. 

Clients and external systems use a standard set of APIs to access big object data. A big object provides consistent performance, 

whether we have 1 million records, 100 million, or even 1 billion. This scale gives a big object its power and defines its features.


Big Object Use Cases

Some of the use cases for Big Object are: -


-> 360° view of the customer : Extend your Salesforce data model to include detailed information from loyalty programs, feeds, clicks,

billing and provisioning information, and more. 

-> Auditing and tracking : Track and maintain a long-term view of Salesforce or product usage for analysis or compliance purposes.

-> Historical archive : Maintain access to historical data for analysis or compliance purposes while optimizing the performance of your 

core CRM or Lightning Platform applications.


Implementation - Key Considerations :


Big objects have a few limitations that should be considered before making any decision :


-> Big objects support custom Lightning and Visualforce components rather than standard UI elements (home pages, detail pages, list views, and so on).

-> Big object storage doesn't count against the organization data storage limit. Based upon Salesforce edition, up to 1 million records are free. 

Additional record capacity can be bought in blocks (50M, but can vary) and the price is negotiable.

-> We can query big objects using standard SOQL or with Async SOQL. Async SOQL schedules and runs queries asynchronously in the background, 

so it can run queries that normally time out with regular SOQL. Async SOQL is the most efficient way to process the large amount of data in a 

big object. Async SOQL is available as an add-on license.

-> Big objects support only object and field permissions (FLS and CRUD), not regular or standard sharing rules.

-> Search, reports and dashboards are not allowed on big objects. As big objects are designed for very large data volumes, 

these features are not yet available in Salesforce. However, there are a few workarounds for reporting on big objects :

   - Use Einstein Analytics, which can report on big objects.

   - Summarize the information you want to report with Async SOQL and store the result in an intermediate custom object. 

     Then we can report on that custom object.

-> Big objects support only Lookup, Date/Time, Email, Number, Phone, Text, Text Area (Long) and URL data types. 

However, we can use workarounds, like creating a formula field on a custom object and copying that value as text into the targeted big object.

-> Salesforce does not support flows, triggers, and Process Builder on big objects.

-> Big objects don't support encryption. If you archive encrypted data from a standard or custom object, it is stored as clear text in the big object.

-> You can't use Salesforce Connect external objects to access big objects in another org.


How can We Query Big Objects?


If you know you are querying a small number of records, you can use SOQL.


The other way is Async SOQL.


Async SOQL : To manage the millions and millions of records in your custom big objects, Salesforce introduced Async SOQL. 

But Async SOQL is included only with the licensing of additional big object capacity.


SOQL - can retrieve records within the governor limits


Async SOQL - can retrieve billions of records
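As an illustration, a synchronous SOQL query on a hypothetical big object; big object queries must filter on the index fields in the order they are defined:

```apex
// Hypothetical big object Customer_Interaction__b whose index is
// Account__c first, then Interaction_Date__c. Filters must follow the
// index order, and operators are restricted (no LIKE, no OR).
List<Customer_Interaction__b> rows = [
    SELECT Account__c, Interaction_Date__c, Channel__c
    FROM Customer_Interaction__b
    WHERE Account__c = '001xx000003DGb2AAG'
];
```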


2.External Objects:


How to Connect?

Salesforce Connect provides multiple options to connect to external systems with the help of the following adaptors:

1.OData 2.0 adapter or OData 4.0 adapter

2.Cross-org adapter

3.Custom adapter created via Apex


Note :

Record level access is not configurable for External Objects.

Record access ("all or nothing") is provided via Object Permissions at the Profile level.

For example, given External Object alpha__x a Profile could be configured to have "Read" access or no visibility at all,

depending on your business requirements.


External Object Relationships :


Two special types of lookup relationships are available for external objects: external lookups and indirect lookups.




Salesforce Connect Limitations :

Salesforce Connect has some limitations that are important to remember and follow while using it:


The maximum number of external objects that can be created per organization: 100

The maximum number of joins per query across external objects and other types of objects: 4

The maximum length of the OAuth token issued by an external system: 4,000 characters

The maximum page size for server-driven paging: 2,000 rows


Canvas apps :


What is Canvas ?


1.Canvas is a set of tools and JavaScript APIs that allow for easy integration with a 3rd party application. 

2.It allows you to take your new or existing applications and make them available to your users as part of their Salesforce experience. 

3.Under the hood, Canvas apps are loaded into Salesforce through an iframe.


What is a Canvas app ?


A Canvas app is a special type of connected app within Salesforce that allows users to access the external system directly 

from within the Salesforce UI. Canvas includes tools that handle:


1.Authentication

2.Context

3.Cross-domain XHR

4.Events


The Canvas SDK :

The Canvas SDK is used from JavaScript in an app that supports JavaScript to access Salesforce data that the user has access to. 

The data requests that Canvas apps make happen in the context of the Salesforce user.


You will include the Canvas SDK in your external app in order to access Salesforce Data and publish/subscribe to events using the Streaming API. 

This is easily included from your Salesforce org using a URL:


<script type="text/javascript" 

src="https://curious-impala-skkpwh-dev-ed.lightning.force.com/canvas/sdk/js/52.0/canvas-all.js">

</script>

Sunday, 6 February 2022

Salesforce Bulk API Using PK Chunking

 Bulk API provides a programmatic option to quickly retrieve and load data from and to Salesforce. 

It is based on the REST principle and is optimized for managing large data volume. 

But if the table has more than 10 million records, the process will time out or hit errors. 

Here is where you need PK chunking. PK Chunking splits the query into smaller chunks automatically, thereby making the process easier and faster.


Salesforce supports custom indexes on custom fields, which help us easily locate rows without scanning

every row in the database.

An index points to the row of data.

It uses the indexed columns to identify the data row without scanning the full table.


Salesforce IDs:

This is the fastest way to find a record in the database by using the ID in the specific query.


Bulk API provides a programmatic option to load or retrieve your org's data to and from Salesforce.

Bulk API is optimized for retrieving large sets of data.

We can use it to query, queryAll, insert, update, or upsert records.


If your table is massive, bulk queries usually time out or return errors because the process is hard to complete.


What is PK Chunking?

->PK stands for Primary Key.

-> Feature enabled in spring '15.

-> Automatically split the query based on Primary Key.

-> Execute a query for each chunk and return the data.

-> Makes large queries manageable in Bulk API.


The query is divided into smaller queries, and each query retrieves a smaller portion of the data

in parallel, thereby making the process easier and faster.


Extract queries are run with successive boundaries, and the number of records to be retrieved by each query is called

the chunk size.


Each query retrieves at most the chunk size number of records.


ex:

The first query retrieves the records between a specified starting ID and the

starting ID plus the chunk size, the next query retrieves the next chunk of records,

and the process continues until all the data is retrieved.
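Conceptually, those successive boundaries can be sketched in a few lines. Here integer IDs stand in for Salesforce record IDs (real PK chunking walks the base-62 ID space, so this is only an analogy):

```python
# Illustrative sketch of how PK chunking divides a key range into
# successive boundaries, one (lower, upper) pair per extract query.
def chunk_boundaries(start_id, total_records, chunk_size):
    """Return (lower, upper) ID boundaries covering the whole range."""
    boundaries = []
    lower = start_id
    end = start_id + total_records
    while lower < end:
        upper = min(lower + chunk_size, end)
        boundaries.append((lower, upper))
        lower = upper
    return boundaries

# 10,000 records split into 2,000-record chunks -> 5 extract queries.
print(chunk_boundaries(0, 10_000, 2_000))
```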


When to use PK Chunking?

->Objects more than 10 million records to improve performance.

->When a bulk query consistently times out.


Supported Objects 


-> Standard (Not all Objects)

-> Custom

-> Sharing tables(If Parent is supported)

-> History tables(If Parent is supported)


Common Errors during Data Management


-> Query not 'selective' enough

Non-selective query against large object type (more than 100,000 rows).

->Query takes too long 

No response from the server.

->Time limit exceeded

Your request exceeded the time limit for processing.

->Too much data returned in query

Too many query rows : 50001

Remoting response size exceeded maximum of 15 MB.


How to enable PK Chunking?


We need to add certain parameters to the Bulk API request headers to enable PK chunking.


Parameters


1.Field name  : Sforce-Enable-PKChunking

2.Field values : TRUE - enable PK chunking; FALSE - disable PK chunking

3.chunkSize    : Number of records in each chunk. Default: 100,000; maximum: 250,000

4.parent       : Parent object when PK chunking queries on sharing objects

5.startRow    : 15- or 18-character record ID; lower boundary for the first chunk


ex : Sforce-Enable-PKChunking: chunkSize=50000; startRow=00130000000xEftMGH
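As a sketch, the header from the example above would be attached to a Bulk API job-creation request like this. The instance URL, API version and session ID are placeholder assumptions, and no request is actually sent; only the URL pattern and headers are shown.

```python
# Sketch: headers for a Bulk API (XML) job-creation request with PK chunking
# enabled. Instance URL and session ID are placeholder assumptions.

INSTANCE_URL = "https://yourInstance.salesforce.com"  # assumption
SESSION_ID = "sessionIdGoesHere"                      # assumption

job_url = f"{INSTANCE_URL}/services/async/58.0/job"
headers = {
    "X-SFDC-Session": SESSION_ID,
    "Content-Type": "application/xml; charset=UTF-8",
    # Enable PK chunking with a custom chunk size and starting row:
    "Sforce-Enable-PKChunking": "chunkSize=50000; startRow=00130000000xEftMGH",
}
print(job_url)
print(headers["Sforce-Enable-PKChunking"])
```

Once the job is created with this header, Salesforce splits the query into batches automatically; you then poll the batches and download each result set.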


Limitations :


-> PK chunking cannot be enabled for queries with:

ORDER BY

Filtering on any Id fields

A LIMIT clause

-> Enabling PK chunking in Data Loader is still only an idea on the IdeaExchange.

-> Each chunk is processed as a separate batch that counts towards your daily batch limit.

Salesforce Sharing and Security

 Profiles 

-Object and field security

-Org permissions

-One per user


Permission Set :


Permission sets offer the ability to grant nearly all the same permissions as a profile,

but are usually limited to one or two specific use cases or permissions per permission set.


-> Good for one off permissions

-> Multiple Per user


Why Still Use Profiles?

1.Default record types are always assigned at the profile level.

2.Page layout assignments are also assigned at the profile level.


Permission set groups :


-> can contain multiple permission sets

-> Based on job function

-> Can include muted permissions


Record Access :

1.Organization-wide defaults (OWD)

Different access levels 

-> Controlled by Parent

-> Private

-> Public Read Only

-> Public Read/Write

External Sharing Model

-> Similar to internal sharing model

-> Access must be more restrictive or equal


2.Roles and role hierarchy

-> Vertical sharing with users above in the hierarchy.

3.Sharing rules

-> Horizontal sharing with users (Bulk)

-> Based on ownership or criteria

-> Can share with role, role and subordinates,or group.

4.Manual Sharing

-> Horizontal sharing (single record)

-> Flexible sharing : Share records you own with specific users.

-> Sharing button on records : Unavailable if OWD is Public Read/Write.


Public Groups and Queues :


Public Groups :

Can segment groups of users that need the same access; can specify Grant Access Using Hierarchies to share with the hierarchy of group users.

Queues :

Can assign records to teams that share a workload: any queue member can take ownership of a record owned by the queue.


Restriction rules:


Restriction rules function as the opposite of sharing rules: they remove access to records, and they are applied after all other sharing has taken place.


5.Team sharing

People have consistent teams they work on records with 

-> Account

-> Opportunity

-> Case


-> Enabled by admin with roles added.

-> Team members specified by users.

-> Account,Opportunity and Case teams.


Account Team :

Salesforce Admin can enable Account Teams and create roles.

Users can specify their own account teams that they want to work with and assign roles.

Users can automatically add team to new accounts.

Team members get access to account.

-> Read

-> Read/Write

Team members can inherit access to child contacts and opportunities.


Opportunity Team :


Works similar to Account Teams.

Admin will enable for org and create roles.

Users can specify their own Opportunity teams.

Users will get Read or Read/Write access to opportunities.

Team members are granted Read access to parent account.


Case Teams:


Case teams work similar to account and opportunity teams with some slight tweaks.

Admin will enable for org and create roles that determine access.


Note : In the case of case teams, it's the roles that determine what access a user gets.


Team members can get Read or Read/Write access to case or be added as private members with no access.


Predefined Case Teams:

Admins can predefine case teams so that you can quickly add people who you frequently work with.


Predefined case teams are going to work similar to the account and opportunity teams except they have to be configured by the Salesforce administrator.



6.Enterprise Territory management

-> Specific use cases

-> Additional layer of security and sharing.


-> Configure Users

Assign users to one or more territories.

-> Configure Accounts

Assign account records to one or more territories.

-> How it works

Access is determined by territories.

-> Additional Objects

Configure opportunity, case and contact access.

-> Define access

Configure default access and territory-specific access.


Enterprise Territory Management Access 

-> Available default access levels depend on OWD.

-> Accounts (View, View/Edit, View/Edit/Transfer/Delete)

-> Opportunities (No access, View, View/Edit)

-> Contacts and Cases (No access, View, View/Edit)

-> Can set territory-specific access when creating new territories.


Territory Hierarchy :


Can create territory hierarchies by setting territories as parents of other territories.


Access Inheritance :


Child territory's access level is inherited by parent territories above it.


Note :

->Assign access based on territory

->Can create territory hierarchy

->Child territory's access is inherited by parent.


Salesforce Org Security Features :


1.Multi-factor Authentication(MFA)


Enforce security when logging in with MFA.



-> Profile level

Configure each profile's Session Security Level Required at Login to High Assurance.

-> Org level

In Setup, configure Session Security Levels in Session Settings to add Multi-factor Authentication to the High Assurance Column.


2.Trusted IP Ranges 


Defined at the org level : users outside the range must provide a security code received via email or text.


-> Trusted IPs at org level.


3.Login IP Ranges


Defined at the profile level : users outside the range are denied access.


-> Login IPs at profile level.


4.Login Hours 


Defined at the profile level : users logging in outside the allowed hours are denied access.


-> Login hours at Profile level.


Delegated Administration :


-> Assign limited admin permissions to users who aren't admins.

-> Create and edit users in roles and subordinate roles.

-> Add and remove profiles and permission sets.

-> Log in as users

-> Manage custom objects


Note : Grant admin permissions to non-admins

Saturday, 5 February 2022

Mulesoft and Anypoint Platform

What is an API?

API stands for "Application Programming Interface".


-> APIs deliver user requests to back-end systems & deliver responses back to the user.

-> APIs act as a communication bridge between a product or service &

other products or services, without either having to know how the other is implemented.


Mule Runtime :


-> Mule Runtime is an integration engine that runs Mule apps.

-> Mule apps connect systems,services, APIs & devices using Mulesoft's API-led connectivity.

-> Mule Runtime supports domains & Policies.

-> The Mule apps,domains & Policies all share an XML domain-specific language.


Why would you use Mulesoft?

Mulesoft unifies apps, data & devices, delivering a single view of customers, automating business processes

& building connected experiences that power great digital experiences.


As enterprises increase the number of apps in use, they need universal API management.

Mulesoft has the Anypoint Platform for full API Lifecycle management.


What is the Anypoint Platform?


The Anypoint Platform is made up of many products & Services that help you with your full API Lifecycle.


1.Design & Build APIs as well as integrations across your enterprise.

2.Reduce time to market with APIs for partner & customer apps.

3.Automate security for threat protection at every layer.


Anypoint Platform Hosting Options :


1.Control Plane

2.Runtime Plane


Control Plane :

Where you design,deploy,manage APIs & Mule applications.

Runtime Plane :

Where your APIs & Mule applications are deployed as well as made available to users.


There are many options for running Anypoint where you want to run it, such as cloud, on-premises or containers.


1.CloudHub

Mulesoft's Anypoint Platform PaaS solution

2.On-premises/IaaS

Run your own Mule servers on your own hardware, whether on-premises bare metal or VMs, or VMs running in a cloud IaaS.

Configure & run Anypoint Platform Private Cloud Edition (PCE) & maintain all data storage, processing, transmission & control plane functionality locally.

3.Kubernetes/Pivotal Cloud Foundry

->Anypoint Runtime Fabric(ARF) is running Anypoint as containers on Kubernetes.

->ARF can run on a pure K8s cluster or cloud managed K8s service such as Amazon Elastic Kubernetes Service(Amazon EKS),

Azure Kubernetes Service(AKS),or Google Kubernetes Engine(GKE).

-> Run Anypoint within the infrastructure provided by Pivotal Cloud Foundry (PCF).

-> Deploy Mule applications to PCF using the Runtime Manager UI.

4.Mulesoft Government Cloud

A secure, FedRAMP-compliant PaaS deployment environment hosted and managed by Mulesoft.


MuleSoft Licensing :


Annual Subscriptions OR Enterprise License Agreements


Mulesoft licensing is annual and subscription-based.

The Mulesoft plans are consistent regardless of deployment approach: on-premises, cloud or a hybrid of the two.

Mulesoft licensing is driven by the number of cores needed to run APIs or apps.

A core is a unit of processing power,they can be physical or virtual & are priced the same.


Mulesoft Support Models :

1.GOLD

2.PLATINUM

3.TITANIUM


What is MuleSoft CloudHub?


CloudHub is the Platform as a Service (PaaS) component of Anypoint Platform.

CloudHub is the hosting of the Anypoint Platform components in Mulesoft's cloud.

With CloudHub you can deploy Mule apps, design & create APIs, integrate with on-premises or cloud apps/services, handle identity integrations & secrets management, manage access, monitor & alert, host a private Exchange & more.


Mulesoft CloudHub Architecture 


Anypoint Runtime Manager is the interface to the Anypoint Platform &

is how CloudHub is accessed & managed.


The CloudHub architecture includes two major components:


1.Anypoint Platform services

2.Worker Cloud


Anypoint Platform Components :


The Anypoint Platform is unique in the API platform landscape in that it can be used to develop & execute APIs, as well as to manage & orchestrate API-led integration across the enterprise.


1.Anypoint Design Center


Anypoint Design Center is a web-based dev environment used to create API specifications, fragments & Mule apps.

It consists of two tools :

a.API Designer

b.Flow Designer


API Designer :


API Designer enables you to create API specifications in multiple modeling languages & create RAML API fragments.


Flow Designer :


Flow Designer lets you create Mule applications to integrate systems into workflows.


2.Anypoint Studio

-> Anypoint Studio is Mulesoft's integration development environment (IDE) for building & testing APIs, Mule apps & integrations.

-> Anypoint Studio is Eclipse-based & installs locally on a developer's computer, supporting Windows, Linux and Mac.

-> You can build API specifications & flows in Anypoint Studio.

-> From Anypoint Studio you can handle many tasks, including:

Run an API locally

Deploy an API to CloudHub

Publish to an Exchange

Work with MUnit testing

Configure API specification files and Mule domains


What is a Mule App?


Mule apps perform system integrations.

Mule apps use components, connectors & modules, and read, write & process data.

Under the hood, Mule apps are XML.

Mule apps can be developed with Anypoint Studio, Flow Designer or other IDEs for advanced developers.


The foundation of Mule apps is the components that execute business logic on messages flowing through your apps.

Mule apps are configured to run in the Mule runtime engine.


Mule Apps have three categories of components:

1.Core Components

2.Connectors

3.Modules


Core Components :


Core components support programmatic operations on Mule apps such as flow control, error handling,

batch processing & transforming data flowing through your Mule apps.


Connectors :

Connectors group components that were created to integrate Mule apps with external systems,

such as 3rd-party API endpoints like Salesforce, SAP, Slack, etc.


Modules

Modules group components that add flexibility to Mule apps, allowing you to aggregate values,

compress data, use Java features, process JSON & much more.


DataWeave Language :


DataWeave is MuleSoft's primary functional programming language used for transforming data.

DataWeave can also be used to configure MuleSoft components & connectors.

DataWeave is also available as a command-line tool.


Flows :


A flow contains a series of Mule components that receive or process messages.


A flow consists of a sequence of cards, with each card representing a core component, connector, module or API.

Mule apps process messages within the scope of Flow & Subflow components.

Mule apps can have a single flow, or multiple flows and subflows.

Subflows are typically used to divide a Mule app into functional modules or for error-handling purposes.

You can schedule flows via the Runtime Manager, or they can have Mule sources like an HTTP listener to trigger the flow's execution.
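To make the XML nature of Mule apps concrete, here is a minimal sketch of a Mule 4 flow whose Mule source is an HTTP listener and whose only processor is a logger. The flow name, path and port are illustrative assumptions, not taken from any real project.

```xml
<!-- Minimal illustrative Mule 4 app: the HTTP listener (Mule source)
     triggers the flow, and the logger processes each message.
     Names, path and port are assumptions. -->
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http">

    <http:listener-config name="HTTP_Listener_config">
        <http:listener-connection host="0.0.0.0" port="8081"/>
    </http:listener-config>

    <flow name="helloFlow">
        <http:listener config-ref="HTTP_Listener_config" path="/hello"/>
        <logger level="INFO" message="#[payload]"/>
    </flow>
</mule>
```

Each card you see in Flow Designer or Anypoint Studio corresponds to one of these XML elements.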


Share data with Experience Cloud and External Users

 Community Cloud is now Experience Cloud.

Used to build multiple customer touchpoints through digital experiences.

->Customer Service

->Partner Management

->External apps

Digital Experience :


If you are working with a new org that has not yet created a community, then you will have to enable digital experiences before you can start creating any sites.


Note :

In new orgs, you will need to enable digital experiences.

-> Select a unique domain.

The external sharing model is separate from the internal sharing model.


User Access Considerations :


Which users need access?


-> Internal users

-> Partners

-> Customers

-> Guest users


External users or customers will only have access to the customer site, 

and external users known as partners will only have access to the partner site.



External Login-based Licenses 


1.External Apps

2.Customer Community

3.Customer Community Plus

4.Partner Community/Channel Account


1.External Apps License :


For the External Apps licenses, it was designed for light B2C or business-to-consumer usage scenarios. 

It is typically seen as a sort of commerce portal. 

Most importantly, it offers minimal access to Salesforce data and allows access to just a few standard objects.


2.Customer Community License


The Customer Community license is also used for business-to-consumer solutions, but it offers access to more Salesforce objects 

than the External Apps license does. These objects include data like cases and events, along with data related to customer service 

such as the work order and service appointment.


3.Customer Community Plus License


For customer-based sites that focus around driving sales, the Customer Community Plus license is available. This license offers the same sharing benefits as the full internal license does.


4.Partner Community License


Good for B2B scenarios where you need access to leads,opportunities and campaigns.


Channel Account :


Salesforce offers a similar license known as the Channel Account license.

In terms of features and access, it is essentially the same as the Partner Community one.

It is just packaged a little differently.

This one is perfect for companies with sites that need to calculate usage based on the number of partners and not individual use.


Guest User Profile :


Special kind of profile access for unauthenticated users.


Each site is assigned a guest user for unauthenticated access.


"secure guest user record access' setting :


Having this enabled means that org-wide defaults for guest users will always be Private for all objects, regardless of what external org-wide settings

have been enabled for the org. This is a good thing, since it helps tighten security for any public-facing sites that can be accessed by

unauthenticated users. It means that in order for guest users to have access to the data they need, admins will need to

specifically create sharing rules for these guest users.

 

Customer Community Login vs Customer Community :


The "Customer Community Login" licenses are for logins-per-month pricing, and the "Customer Community" licenses are for named-user licensing.


High Volume Customer Portal license :


These are intended for sites with thousands or millions of users, but they are limited in what access they provide.


External Sharing Options :

1.Sharing Sets

2.Share Groups

3.Account Role Hierarchy

4.External Role Hierarchy

5.Account Relationships

6.Super Users


When it comes to associating these options with specific licenses, sharing sets and share groups are available to users 

with the Customer Community license since this license is not role-based.


keep in mind that the default level set for external access cannot be more permissive than the access for internal users.


Sharing Sets and Share Groups :


Sharing Sets:


Grants access to records associated with :

->Account or contact records that match the user's account or contact.

-> works with access mappings

Use Profiles and not roles to grant access.


Sharing sets work by granting access to records via something known as an access mapping.


Access Mappings :


Assume that there is a need to grant external users access to the case object. 

When setting up a sharing set, you will need to specify whether the ID associated with an account matches the related account ID for that case 

or whether this match comes from the ID associated with the contact. You will then need to specify what access level should be allowed, 

and this can be either public read-only or public read/write.
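The matching rule an access mapping applies can be pictured as a small function: if the user's account (or contact) ID equals the related ID on the record, the mapped access level is granted. This is a conceptual sketch only, with hypothetical field names; Salesforce evaluates sharing sets internally, not through user code.

```python
# Conceptual sketch of sharing-set access-mapping logic.
# Field names are hypothetical; Salesforce evaluates this internally.

def access_for(user, record, mapping):
    """Return the mapped access level if the user matches the record."""
    user_value = user[mapping["user_field"]]        # e.g. the user's AccountId
    record_value = record[mapping["record_field"]]  # e.g. the case's AccountId
    if user_value == record_value:
        return mapping["access"]                    # "Read Only" or "Read/Write"
    return "None"

mapping = {"user_field": "AccountId", "record_field": "AccountId",
           "access": "Read/Write"}
user = {"AccountId": "001A"}
case_rec = {"AccountId": "001A", "Subject": "Broken widget"}
print(access_for(user, case_rec, mapping))
```

A user whose account ID does not match the record's related account ID would simply fall outside the mapping and get no access from the sharing set.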


Share Groups:


Share groups work together with sharing sets to grant access to internal users for any records owned by external users that are part of the sharing set. 

But there can only be one share group associated with a sharing set.


Account Role Hierarchies :


Account role hierarchies facilitate sharing with external users.


Default partner roles :

 

The first is a partner user, followed by a partner manager, and partner executive. 

Whenever the first external user is enabled for a partner account, an account role hierarchy is created.

If the number of roles for the site remains at three, then this will include a partner user at the bottom of the hierarchy, 

and above that will be the partner manager and partner executive at the very top.


Account Relationships:


Used to connect two partner accounts.

Configured with clicks and not code.

Configure data to be shared through rules.


To take advantage of this feature, it will first need to be enabled in the org, and once it is, it cannot be disabled.


External Account Hierarchies :


External account hierarchies is a feature that was introduced as a beta feature in the Summer '20 release and has now since moved to general availability.

It works very similar to the role hierarchy that is used for internal users since it opens up access vertically to users with higher roles. 

It is just that this feature is used for external users assigned a role-based license, and here is the thing that separates it from account relationships.

It does not require setting up sharing rules.


Note :

Just like account relationships, this is a feature not enabled by default. But once it is turned on in an org, it cannot be disabled.

If it is enabled, a new external account hierarchy object will be created.


Note :

Use of the more advanced sharing features may come with the advantage of wider access for external users, but there are always tradeoffs to consider for that wider access.


Super Users :


Super users are what was once known as portal super users.

Super users allow site users to view the records of other users that are either at the same level or below them in the hierarchy. 

This feature is available to users assigned the Customer Community Plus or Partner Community licenses only.


It must first be enabled by toggling it on in the digital experiences settings.

As long as super user access has been enabled in the org and each of these partner managers has been enabled as a partner super user, 

they should have access to each other's records.

This feature is limited to certain objects such as opportunities, leads, cases, and custom objects. 


Summary :

1.Which external user sharing features are available depends on the license: either role-based or non-role-based.

2.External User sharing features :

->Sharing sets and share groups

These two features are only available with the customer community license, which is non role-based. 

This means the feature must work with a profile to open up access. 

It also means they can perform well for simple sites with thousands or millions of users. 

These are known as high-volume sites.

-> Account role hierarchy

 Beyond this are the advanced sharing features which are role-based. It begins with the account role hierarchy.

-> Account Relationships and sharing rules

 If a more granular level of access is needed, there are account relationships, which work together with sharing rules.

-> External Role Hierarchy

For sites that do not want to work with sharing rules, there is a new feature known as the external role hierarchy.

->Super Users 

They will be able to share records with users that are at the same level or below them in the hierarchy. 


Note :

1.A site user's access begins with the user's baseline record access along with the external org-wide defaults set for the org.

2.These external org-wide defaults can never be more permissive than the equivalent internal setting, 

which is logical since you would never want an external user to have more access than an internal one. 

3.It is considered a best practice to limit the number of customer and partner account roles allowed in an org.

4.It is important to remember that using features that allow more granular sharing results in a trade-off: there will also be more limits, along with possible site performance problems.


Guest User Profile :


Restricted access for unauthenticated users.

-> Security greatly enhanced since the Summer '20 release.

-> Use sharing rules to grant access.

-> Guest users cannot own new records.

Records should be assigned to a default owner.


Sharing Sets :

-> Grants access to records that match the user's account or contact record.

-> Use profiles and not roles to grant access.

-> Only one sharing set per object and profile.