Sunday, 14 May 2023

Best Practices For Large Data Volumes in Salesforce

If your org has tens of thousands of users, tens of millions of records, or hundreds of gigabytes of total record storage, you need to follow best practices for large data volumes. A few considerations:


Skinny Tables

Ask Salesforce support to create a skinny table to make your queries faster. This is especially important for reports and batch jobs that take a long time to execute, because a skinny table avoids the underlying join between the standard-field and custom-field tables.
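
Skinny tables are transparent to your code; the query optimizer uses them automatically. As a rough sketch, a query like the one below is a typical candidate, since it mixes standard and custom fields over a very large object (Region__c is a hypothetical custom field):

    // A typical skinny-table candidate: it selects both standard and custom
    // fields, which otherwise forces a join between the standard-field and
    // custom-field tables under the hood. Region__c is hypothetical.
    List<Account> accts = [
        SELECT Id, Name, Industry, Region__c
        FROM Account
        WHERE Industry = 'Banking'
    ];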


Indexes

Use indexed fields in the WHERE clause of your query. Filtering on multiple indexed fields makes the query more selective and therefore faster. You can also request two-column custom indexes; a two-column index is typically more efficient than two single-column indexes on the same fields. A sketch of a selective query follows.
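
For example, a minimal SOQL sketch, where Region__c is a hypothetical custom field assumed to have a custom index, and CreatedDate is indexed by default:

    // Region__c (assumed custom-indexed) and CreatedDate (standard index)
    // make this filter selective; LIMIT bounds the result set.
    List<Account> recentEmea = [
        SELECT Id, Name
        FROM Account
        WHERE Region__c = 'EMEA'
        AND CreatedDate = LAST_N_DAYS:30
        LIMIT 200
    ];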


Divisions

Divisions let you partition the data of large deployments to reduce the number of records returned by queries and reports. For example, a deployment with many customer records might create divisions called US, EMEA, and APAC to separate the customers into smaller groups that are likely to have few interrelationships.
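
As a hedged sketch, assuming the Divisions feature is enabled (which adds a Division lookup field to supported objects such as Account), a query can filter on it to touch only one partition:

    // Query only the EMEA partition instead of the whole table.
    Division emea = [SELECT Id FROM Division WHERE Name = 'EMEA' LIMIT 1];
    List<Account> emeaAccounts = [
        SELECT Id, Name
        FROM Account
        WHERE Division = :emea.Id
    ];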


Mashups

One approach to reducing the amount of data in Salesforce is to maintain large data sets in a different application, and then make that application available to Salesforce as needed.
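
A minimal Apex sketch of this callout pattern, assuming a hypothetical Named Credential called External_Orders that points at the external application (the endpoint path and response shape are assumptions too):

    public with sharing class ExternalOrderService {
        // Fetch large order data on demand instead of storing it in Salesforce.
        public static List<Object> fetchOrders(String accountId) {
            HttpRequest req = new HttpRequest();
            // 'External_Orders' is a hypothetical Named Credential.
            req.setEndpoint('callout:External_Orders/orders?accountId=' + accountId);
            req.setMethod('GET');
            HttpResponse res = new Http().send(req);
            // The response is assumed to be a JSON array of order records.
            return (List<Object>) JSON.deserializeUntyped(res.getBody());
        }
    }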


Deleting Data

Delete unused or unnecessary data. Use the Bulk API's hard delete option, which allows records to bypass the Recycle Bin and become immediately eligible for deletion.
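
For millions of records, run a Bulk API job with the hard-delete operation. At smaller scale, the same idea in Apex looks roughly like this (the object, the two-year cutoff, and the batch size are assumptions):

    // Delete stale tasks and purge them from the Recycle Bin so the
    // storage is reclaimed immediately.
    List<Task> staleTasks = [
        SELECT Id FROM Task
        WHERE CreatedDate < LAST_N_YEARS:2
        LIMIT 1000
    ];
    delete staleTasks;
    Database.emptyRecycleBin(staleTasks);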


Search

When large volumes of data are added or changed, the search system must index that information before it becomes available for search by all users, and this process might take a long time.


Indexing with Nulls

Try to avoid storing null values in fields you filter on, because custom indexes normally exclude null rows, so a filter like Field = null cannot use the index. Replace nulls with a default such as NA, or use a formula field to substitute a value and index the formula field instead. A sketch of the first option follows.
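
A minimal sketch, assuming a hypothetical indexed Status__c field, using a trigger to default nulls to 'NA' so the indexed column always has a value:

    trigger AccountDefaults on Account (before insert, before update) {
        for (Account a : Trigger.new) {
            if (a.Status__c == null) {
                a.Status__c = 'NA'; // keep the indexed column non-null
            }
        }
    }

Queries can then filter on Status__c = 'NA' instead of Status__c = null, which the index can serve.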


API Performance

Add additional filters to your REST API queries so they don't scan all of your Salesforce data.
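
The SOQL string you pass to the REST query endpoint (GET /services/data/vXX.X/query?q=...) should carry the same selective filters you would use in Apex. A sketch, where the seven-day window is an assumption:

    // Unselective: SELECT Id, Name FROM Contact  (scans the whole object)
    // Selective: bound the scan with an indexed filter.
    List<Contact> recentContacts = [
        SELECT Id, Name
        FROM Contact
        WHERE CreatedDate = LAST_N_DAYS:7
    ];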


Sort Optimisation on a Query

Add a LIMIT clause and an ORDER BY on an indexed field so the query can use the sort optimization and scan only a bounded slice of the data, as sketched below.
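
A minimal sketch: ORDER BY on an indexed field such as CreatedDate plus a LIMIT lets the optimizer walk the index and stop early instead of scanning and sorting the whole table:

    // The optimizer reads the CreatedDate index in order and stops
    // after 100 rows.
    List<Account> newestAccounts = [
        SELECT Id, Name
        FROM Account
        ORDER BY CreatedDate DESC
        LIMIT 100
    ];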
