Saturday, 7 September 2019

Platform Cache in Salesforce


Platform Cache can improve performance by storing data temporarily without
consuming extra data storage. It can replace some uses of Custom Settings
and Custom Metadata Types, and it can reduce the need for repeated web callouts
(for example, by caching an access token).


Note : session cache doesn't support asynchronous Apex.
For example, you can't use future methods or batch Apex with
session cache.

Platform Cache can also be used to reduce repeated API callouts.


ex : caching an access token for an external service

// Get the org cache partition and store the token and its expiry date
Cache.OrgPartition orgPart = Cache.Org.getPartition('local.SomePartitionName');
orgPart.put('IntegrationAccessToken', '123w12e12wzws1jnds3rbhh3');
orgPart.put('IntegrationTokenExpiry', '10/03/2018');

// Read the token back and cast it to the expected type
System.debug((String) orgPart.get('IntegrationAccessToken'));
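
A follow-up sketch showing how the cached token can avoid a repeated callout (requestNewToken is a hypothetical helper, not part of the original example):

// Read the token from org cache before making the callout
Cache.OrgPartition tokenPart = Cache.Org.getPartition('local.SomePartitionName');
String token = (String) tokenPart.get('IntegrationAccessToken');
if (token == null) {
    // Cache miss: call the auth endpoint again and re-cache the token
    token = requestNewToken();   // hypothetical helper
    tokenPart.put('IntegrationAccessToken', token);
}
// Use the token in the Authorization header of the actual callout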


1. Org cache

Org cache stores org-wide data that anyone in the org can access.
It is accessible across sessions, requests, and org users and profiles.
Data can live in the org cache for up to 48 hours.
By default, the time-to-live (TTL) value for org cache is 24 hours.


Global values shared by all users
Entries exist for up to 48 hours
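
Values can also be stored with an explicit time-to-live in seconds; a sketch using the key from the earlier example:

// Store the token with an explicit TTL of 48 hours (172800 seconds), the maximum for org cache
Cache.Org.put('local.SomePartitionName.IntegrationAccessToken', '123w12e12wzws1jnds3rbhh3', 172800);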

2. Session cache
Specific to a user's session
Expires after 8 hours or when the session ends

Default cache allocation by edition:
Enterprise Edition - 10 MB
Unlimited and Performance Editions - 30 MB
All other editions - 0 MB

Maximum partition size - 5 MB

Use Case :
============
1. Get the top 10 accounts by annual revenue
2. Get the number of VIP accounts in each region (7 regions)

ex :

List<Account> accounts = [select Name,Region__c,BillingAddress,Description,
   Industry,Status__c,Type,Opportunity_count__c,AnnualRevenue
   from Account order by AnnualRevenue DESC NULLS LAST LIMIT 10];

   // Add to Cache
   Cache.Org.put('local.AcctPartition.TopAccounts',accounts);
 
   // Get the cache Partition
 
   cache.OrgPartition orgPart = cache.Org.getpartition('local.AcctPartition);
 
   // Get the data and cast it to the right datatype
 
   List<Account> accuntsCache = (List<Account>) orgPart.get('TopAccounts);
 
   if(accuntsCache !=null){
     return accountsCache;
   }
    return accountscache;

ex :

Map<String,Integer> accountsByRegion = new Map<String,Integer>();
List<Schema.PicklistEntry> regions = Account.Region__c.getDescribe().getPicklistValues();
for(Schema.PicklistEntry pe : regions){
  accountsByRegion.put(pe.value,0);
}

for(String regionName : accountsByRegion.keyset()){
   Integer count =[ select count() from account where Region__c =: regionName and Type ='VIP'];
   accountsByRegion.put(regionName,count);
  }
   // put any data structure into the cache
 
   Cache.Org.put('local.AcctPartition.VIPAccounts',accountsByRegion);
 
   Map<String,Integer> accountsByRegion = (Map<String,Integer>)cache.Org.getPartition('local.AcctPartition').get('VIPAccounts');
 
   if(accountsByRegion !=null){
      return convertMapToDetail(accountsByRegion);
   }
 
 return getAccountsByRegion();

 Note :
 The Cache Diagnostics user permission allows you to see detailed
 information about the Platform Cache feature.

 Session cache :
 ================
 Session cache stores data that is tied to a user's session,
 so other users in the org cannot access it.
 The maximum lifetime of a session cache entry is 8 hours.

 Use the Cache.Session and Cache.SessionPartition classes
 to access values stored in the session cache.

 Cache.Session.put('key', value);
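
 A minimal sketch of session cache usage through a partition (the partition name reuses the earlier example; the key and stored value are illustrative):

 // Write to the session cache using a fully qualified key
 Cache.Session.put('local.SomePartitionName.UserTheme', 'compact');

 // Or work through a partition handle
 Cache.SessionPartition sessionPart = Cache.Session.getPartition('local.SomePartitionName');
 sessionPart.put('UserTheme', 'compact');

 // Read the value back and cast it to the expected type
 String theme = (String) sessionPart.get('UserTheme');
 System.debug(theme);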


 CacheBuilder Interface :
 ========================
 public class AccountCache implements Cache.CacheBuilder {
     public Object doLoad(String key) {
         // The key is ignored here; the query always loads the top ten accounts
         List<Account> accounts = [SELECT Id, AnnualRevenue, Name, Region__c
             FROM Account ORDER BY AnnualRevenue DESC NULLS LAST LIMIT 10];
         return accounts;
     }
 }

 The CacheBuilder interface has a single method, doLoad(String key), which takes one parameter.
 Cached values are requested with the builder class and a key.

 // First call: cache miss, so doLoad() runs and the result is cached under the key 'TopTen'
 List<Account> myAccounts = (List<Account>) Cache.Org.get(AccountCache.class, 'TopTen');

 // Second call: cache hit, the cached value is returned without running doLoad()
 List<Account> myAccounts2 = (List<Account>) Cache.Org.get(AccountCache.class, 'TopTen');

 The interface checks whether the value is already cached.

 If it is cached, the value is returned; otherwise it is calculated, cached, and returned.

 Note :
 Instead of storing and retrieving cache values manually, it is better to provide a
 loading strategy to Platform Cache, so that upon a cache miss Salesforce
 automatically calls the class to load the cache for that key.
 This reduces code and handles cache misses much more gracefully.

Monday, 2 September 2019

Handling MIXED_DML_OPERATION Exception in Salesforce

You can easily run into this error if you try to perform DML on setup and non-setup objects in the same transaction.

Non-Setup objects are standard objects like Account or any custom object.

Setup objects are Group, GroupMember, QueueSobject, User, UserRole, UserTerritory, Territory, etc.

ex :
You cannot insert an account and then insert a user or a group member in a single transaction.
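
A minimal snippet that reproduces the error (a sketch, not from the original post):

// Non-setup object DML
Account a = new Account(Name = 'Acme');
insert a;

// Setup-object DML (assigning a role to a user) in the same transaction:
// throws System.DmlException with MIXED_DML_OPERATION
User u = [SELECT Id, UserRoleId FROM User WHERE Id = :UserInfo.getUserId()];
u.UserRoleId = [SELECT Id FROM UserRole LIMIT 1].Id;
update u;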

1. Avoid MIXED_DML_OPERATION by using System.runAs in test classes.

ex :

@isTest
static void test_mixed_dmlbug() {
    User u;
    Account a;     
    User thisUser = [ select Id from User where Id = :UserInfo.getUserId() ];
    System.runAs ( thisUser ) {
        Profile p = [select id from profile where name='(some profile)'];
        UserRole r = [Select id from userrole where name='(some role)'];
        u = new User(alias = 'standt', email='standarduser@testorg.com',
            emailencodingkey='UTF-8', lastname='Testing',
            languagelocalekey='en_US',
            localesidkey='en_US', profileid = p.Id, userroleid = r.Id,
            timezonesidkey='America/Los_Angeles',
            username='standarduser@testorg.com');
        insert u;
        // FirstName/LastName and PersonEmail on Account require Person Accounts to be enabled
        a = new Account(Firstname='Terry', Lastname='Testperson');
        insert a;
    }
    System.runAs(u) {
        a.PersonEmail = 'test@madeupaddress.com';
        update a;
    }

}

2. Avoid MIXED_DML_OPERATION exception by using a future method.

ex : 

trigger AutomateContact on Account (after insert) {
    List<Contact> lc = new List<Contact>();

    for (Account acc : Trigger.new) {
        lc.add(new Contact(LastName = 'dk', AccountId = acc.Id));
    }
    insert lc;

    // Defer the setup-object DML (User insert) to a future method to avoid mixed DML
    UtilClass.userInsertWithRole('dineshd@outlook.com', 'Dinesh', 'dineshd@outlook.com', 'Dineshdk');
}
public class UtilClass {
    @future
    public static void userInsertWithRole(String uname, String al, String em, String lname) {
        Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User'];
        UserRole r = [SELECT Id FROM UserRole WHERE Name = 'COO'];
        // Create new user with a non-null user role ID
        User u = new User(alias = al, email = em,
            emailencodingkey = 'UTF-8', lastname = lname,
            languagelocalekey = 'en_US',
            localesidkey = 'en_US', profileid = p.Id, userroleid = r.Id,
            timezonesidkey = 'America/Los_Angeles',
            username = uname);
        insert u;
    }
}



Note :

System.runAs(User)

1.The system method runAs enables you to write test methods that change the user context to an existing user or a new user.

2.The original system context is started again after all runAs test methods complete.

Advantage of Trigger Framework in Salesforce

According to the trigger framework pattern:
1) Create a single trigger for each object.
2) Create one handler class which calls the action.
3) Create one action class with the business logic; the same methods can be reused elsewhere, for example called from a VF page or a batch job if required (see the sketch after this list).
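
A minimal sketch of this layout, assuming illustrative names (AccountTrigger, AccountTriggerHandler, AccountActions and their methods are examples, not a prescribed framework):

// One trigger per object: no logic, just delegation to the handler
trigger AccountTrigger on Account (after insert, before update) {
    AccountTriggerHandler handler = new AccountTriggerHandler();
    if (Trigger.isAfter && Trigger.isInsert) {
        handler.afterInsert(Trigger.new);
    } else if (Trigger.isBefore && Trigger.isUpdate) {
        handler.beforeUpdate(Trigger.new, Trigger.oldMap);
    }
}

// Handler class with context-specific methods; it only routes to the action class
public class AccountTriggerHandler {
    public void afterInsert(List<Account> newAccounts) {
        AccountActions.createDefaultContacts(newAccounts);
    }
    public void beforeUpdate(List<Account> newAccounts, Map<Id, Account> oldMap) {
        AccountActions.recalculateRegion(newAccounts, oldMap);
    }
}

// Action class with the business logic, reusable from a VF page or a batch job
public class AccountActions {
    public static void createDefaultContacts(List<Account> accounts) {
        List<Contact> contacts = new List<Contact>();
        for (Account acc : accounts) {
            contacts.add(new Contact(LastName = 'Default', AccountId = acc.Id));
        }
        insert contacts;
    }
    public static void recalculateRegion(List<Account> accounts, Map<Id, Account> oldMap) {
        // business logic goes here
    }
}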

1) One Trigger Per Object
A single Apex trigger is all you need for one particular object. If you develop multiple triggers for a single object, you have no way of controlling the order of execution when those triggers run in the same context.

2) Logic-less Triggers
If you write methods in your Triggers, those can’t be exposed for test purposes. You also can’t expose logic to be re-used anywhere else in your org.

3) Context-Specific Handler Methods
Create context-specific handler methods in Trigger handlers

4) Bulkify your Code
Bulkifying Apex code refers to the concept of making sure the code properly handles more than one record at a time.

5) Avoid SOQL Queries or DML Statements inside FOR Loops
An individual Apex request gets a maximum of 100 SOQL queries before exceeding that governor limit. So if a trigger issues a query per record and is invoked with more than 100 Account records, the governor limit will throw a runtime exception (see the bulkified sketch after this list).

6) Using Collections, Streamlining Queries, and Efficient For Loops
It is important to use Apex Collections to efficiently query data and store the data in memory. A combination of using collections and streamlining SOQL queries can substantially help writing efficient Apex code and avoid governor limits

7) Querying Large Data Sets
The total number of records that can be returned by SOQL queries in a request is 50,000. If returning a large set of query results causes you to exceed your heap limit, then a SOQL query for loop must be used instead. It can process multiple batches of records through the use of internal calls to query and queryMore.

8) Use @future Appropriately
It is critical to write your Apex code to efficiently handle bulk or many records at a time. This is also true for asynchronous Apex methods (those annotated with the @future keyword), so keep in mind the differences between synchronous and asynchronous Apex.

9) Avoid Hardcoding IDs
When deploying Apex code between sandbox and production environments, or installing Force.com AppExchange packages, it is essential to avoid hardcoding IDs in the Apex code. By doing so, if the record IDs change between environments, the logic can dynamically identify the proper data to operate against and not fail.
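
As a sketch of items 4 through 6 above, assuming the custom field Opportunity_count__c on Account from the earlier query in this post, this handler method uses collections and a single aggregate query instead of querying inside the loop:

// Bulkified: one aggregate query, results held in a map keyed by AccountId
public static void countOpportunities(List<Account> accounts) {
    Map<Id, Integer> oppCounts = new Map<Id, Integer>();
    for (Account acc : accounts) {
        oppCounts.put(acc.Id, 0);
    }
    // Single query for all accounts instead of one query per account
    for (AggregateResult ar : [SELECT AccountId, COUNT(Id) cnt
                               FROM Opportunity
                               WHERE AccountId IN :oppCounts.keySet()
                               GROUP BY AccountId]) {
        oppCounts.put((Id) ar.get('AccountId'), (Integer) ar.get('cnt'));
    }
    // Write the counts back onto the in-memory records (before-update context assumed)
    for (Account acc : accounts) {
        acc.Opportunity_count__c = oppCounts.get(acc.Id);
    }
}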


Custom Iterator (Iterable) in Batch Apex

1. If you use an iterable, the governor limit for the total number of records retrieved by SOQL queries is still enforced.

2. If your code accesses external objects and is used in batch Apex, use Iterable<sObject> instead of Database.QueryLocator.

global class CustomIterable implements Iterable<Contact>, Iterator<Contact> {

    List<Contact> con { get; set; }
    Integer i { get; set; }

    global CustomIterable() {
        con = [SELECT Id, LastName FROM Contact LIMIT 5];
        i = 0;
    }

    // Iterable interface method: batch Apex calls this to obtain the iterator
    global Iterator<Contact> iterator() {
        return this;
    }

    // Iterator interface hasNext() method: returns true while the list 'con'
    // still has unread records, otherwise false
    global Boolean hasNext() {
        return i < con.size();
    }

    // Iterator interface next() method: returns the next element of the list
    // queried in the constructor until every record has been consumed
    global Contact next() {
        if (i >= con.size()) {
            return null;
        }
        i++;
        return con[i - 1];
    }
}

Note :
If your code accesses external objects and is used in batch Apex, use Iterable<sObject> instead of Database.QueryLocator.

In batch Apex, the start method usually returns a Database.QueryLocator, but you can also return an Iterable.

global class BatchClass implements Database.Batchable<Contact> {

    global Iterable<Contact> start(Database.BatchableContext info) {
        return new CustomIterable();
    }

    global void execute(Database.BatchableContext info, List<Contact> scope) {
        List<Contact> conToUpdate = new List<Contact>();
        for (Contact c : scope) {
            c.LastName = 'Test123';
            conToUpdate.add(c);
        }
        update conToUpdate;
    }

    global void finish(Database.BatchableContext info) {
    }
}

Note :
1. Use the Database.QueryLocator object when you are using a simple query to generate the scope of records for the batch job. In this case, the governor limit on the total number of records retrieved by SOQL queries is bypassed (see the sketch below).

2. Use an iterable object when you have complex criteria to process the records.
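
For contrast with the iterable version above, a minimal sketch of a start method that returns a Database.QueryLocator (the class name SimpleBatch is illustrative):

global class SimpleBatch implements Database.Batchable<sObject> {

    global Database.QueryLocator start(Database.BatchableContext info) {
        // The limit on total records retrieved by SOQL queries is bypassed for this query
        return Database.getQueryLocator('SELECT Id, LastName FROM Contact');
    }

    global void execute(Database.BatchableContext info, List<sObject> scope) {
        // process each batch of records here
    }

    global void finish(Database.BatchableContext info) {
    }
}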

External ID in Salesforce

The External ID field allows you to store unique record IDs from an external system, typically for integration purposes.

If you create an External ID field, Salesforce indexes it by default.

During an upsert operation (see the sketch after this list):

1. If the External ID is matched once, the matching record is updated.
2. If the External ID is not matched, a new record is created.
3. If the External ID is matched more than once, an error is thrown.
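
A minimal sketch of an upsert keyed on a custom External ID field (the field name Legacy_Id__c is an assumption for illustration):

// Legacy_Id__c is assumed to be a Text field marked as External ID (and Unique)
List<Account> accounts = new List<Account>{
    new Account(Name = 'Acme', Legacy_Id__c = 'EXT-001'),
    new Account(Name = 'Globex', Legacy_Id__c = 'EXT-002')
};

// Matches on Legacy_Id__c: matched records are updated, unmatched rows are inserted,
// and more than one match for the same value throws a DmlException
upsert accounts Account.Fields.Legacy_Id__c;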

Only fields with the following data types can be an External ID:

1. Number
2. Text
3. Email

You can designate up to 25 External ID fields per object.

External ID fields are usually also set with the unique property so that the IDs are unique to each record.

Note :
A field marked only as unique is not used for upsert matching; the unique property just enforces uniqueness, while upsert matches on the External ID field.

Indirect Lookup Relationship vs External Lookup Relationship

Types of relationships in Salesforce :
======================================
1. Master-detail relationship
2. Lookup relationship
3. Self relationship
4. External lookup relationship
5. Indirect lookup relationship
6. Many-to-many relationship (junction object)
7. Hierarchical relationship

Indirect lookup relationship :
=====================
An indirect lookup relationship links a child external object to a parent standard or custom object.

You select a custom unique, External ID field on the parent object to match against the child's indirect lookup relationship field, whose values are determined by the specified External Column Name.

In an indirect lookup relationship, the Salesforce standard or custom object is the parent and the external object is the child.

External lookup relationship :
=====================
An external lookup relationship links a child standard, custom, or external object to a parent external object.

The values of the standard External ID field on the parent external object are matched against the values of the external lookup relationship field. For a child external object, the values of the external lookup relationship field come from the specified External Column Name.

In an external lookup relationship, the external object is the parent.