Salesforce Architects
Published September 9, 2020 · 8 minute read
One of the first lessons architects learn when using Salesforce is the impact of throttling limits on performance. Salesforce's multi-tenant architecture offers great flexibility and scalability, but its shared nature requires us all to be good neighbors. Depending on how we integrate with Salesforce, different technologies can be used to ensure we provide the scale we need.
As the demand on your API increases, the risk of reliability issues increases, which can have an impact on the business. It is important to understand the needs of the business for all data requests to ensure that requests are responded to in a timely manner. "Timely" can have different meanings, as some business processes may require near real-time data, while others may have higher latencies. Once the data needs of the business are clearly understood, different strategies can be implemented to meet those needs.
This post explores issues to consider when you're building a system that makes a lot of API calls to read data from the Salesforce platform. Because the considerations differ somewhat between high-volume reads and high-volume writes, high-volume writes will be explored separately in a future post.
Problems inevitably arise when technology cannot keep up with business data needs. This typically manifests as a slowdown, with users experiencing long delays while retrieving data. For example, in a customer service center, poor data performance can lead to longer call handling times. Beyond that, however, even the quality of the data suffers. A data synchronization process that is not fast enough can lead to incorrect or even corrupted data on the distributed system.
As these problems increase, trust in the system decreases, which can affect the entire business.
The resulting confusion and miscommunication can undermine trust with business users and customers. Growing frustration can lead to lost customers, low employee morale and increased turnover.
To build trust, you need to address scalability issues that affect platform performance. Before you can address these issues, you must first fully understand the business requirements. This understanding will enable you to make informed decisions when there are trade-offs required, sacrificing performance in one area to improve performance in another.
When building an API integration with Salesforce, the most obvious starting point is the native Salesforce REST and SOAP APIs. Both APIs support bulk reads, but both are bound by governor limits. These limits change over time, so be sure to read the latest developer documentation. For these APIs, the two main constraints are the concurrent API request limit and the total API request limit. You can work around these limits with design changes, but those trade-offs may cause other limits to become a factor.
For example, instead of making multiple separate API requests, you can use a custom Apex REST API to combine them into one. Requests for several different related pieces of data can be served by a single call. This trade-off reduces the risk of hitting the total API request limit, but increases exposure to other limits, such as the concurrent API request limit, Apex CPU time limits, and Apex heap size limits.
Here is a simple example: three API calls made in sequence against Salesforce's native REST API. If this integration starts hitting the total API request limit, you can redesign it to use a custom Apex REST API call.
The getFullAccount API call marshals the data within Salesforce before returning it to the client. This reduces three API calls to one, which helps avoid the total API request limit. However, you can expect this call to take longer to execute, which puts more pressure on the concurrent API request limit.
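To make the trade-off concrete, here is a minimal Python sketch contrasting the two request patterns. The paths only illustrate the shape of each design; the API version, the object queries, and the `fullAccount` endpoint path are assumptions, and the functions simply build request paths rather than issuing real HTTP calls.

```python
# Sketch of the trade-off above: three native REST calls versus one call to a
# hypothetical custom Apex REST endpoint (/services/apexrest/fullAccount) that
# marshals the same data server-side. Paths and API version are illustrative.

BASE = "/services/data/v50.0"

def native_read_requests(account_id: str) -> list[str]:
    """Three sequential calls against the native REST API."""
    return [
        f"{BASE}/sobjects/Account/{account_id}",
        f"{BASE}/query/?q=SELECT+Id+FROM+Contact+WHERE+AccountId='{account_id}'",
        f"{BASE}/query/?q=SELECT+Id+FROM+Opportunity+WHERE+AccountId='{account_id}'",
    ]

def custom_read_requests(account_id: str) -> list[str]:
    """One call to a custom Apex REST endpoint that returns everything."""
    return [f"/services/apexrest/fullAccount/{account_id}"]

# The redesign trades three units of the total-request limit for one
# longer-running request that weighs more on the concurrency limit.
assert len(native_read_requests("001xx0000001")) == 3
assert len(custom_read_requests("001xx0000001")) == 1
```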
You can also use the Composite API to combine multiple related requests into one call. This approach simplifies how you structure your calls and reduces the risk of exceeding the total API request limit. Starting with the Winter '21 release, you can also use the Composite Graph API to pack a complex set of subrequests into a single call, processing up to 500 subrequests in a single payload, with the guarantee that if any part of the operation in a given graph fails, the entire transaction for that graph is rolled back.
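As a sketch of what a Composite API request body can look like, the following Python builds a two-subrequest payload in which the second subrequest references a field from the first response using the Composite API's `@{referenceId.field}` syntax. The record ID and API version are illustrative.

```python
import json

# Minimal sketch of a Composite API request body that reads an account and its
# related contacts in one round trip. The @{theAccount.Id} reference lets the
# second subrequest use a field from the first response; IDs are illustrative.
composite_body = {
    "compositeRequest": [
        {
            "method": "GET",
            "url": "/services/data/v50.0/sobjects/Account/001xx0000001",
            "referenceId": "theAccount",
        },
        {
            "method": "GET",
            "url": ("/services/data/v50.0/query/?q=SELECT+Id,LastName+FROM+Contact"
                    "+WHERE+AccountId='@{theAccount.Id}'"),
            "referenceId": "theContacts",
        },
    ]
}

# This body would be POSTed to /services/data/v50.0/composite in one API call.
payload = json.dumps(composite_body)
```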
The flexibility to trade off between different governor limits allows for greater scale, but only to a point. With enough API traffic, even very short transactions can run into the concurrent API request limit. To achieve greater scale, you need to use a different architecture, leverage other aspects of the Salesforce platform, or expand beyond the platform.
Salesforce streaming events provide a different approach to handling large data volumes. Instead of requesting data from Salesforce to keep systems in sync, data can be pushed from Salesforce to other systems. PushTopic events, Change Data Capture (CDC) events, platform events, and generic events each offer slightly different data delivery capabilities. The event type you choose will depend on your specific use case, but the general architectural patterns are similar across them.
For example, Change Data Capture (CDC) events provide a way to notify external systems when data changes in Salesforce. Because CDC events are fundamentally asynchronous, there is no guarantee that any given change will be immediately visible to external systems. However, by feeding data into external systems and using those systems to serve data requests, you can reduce the need to read large amounts of data directly from Salesforce.
In this example, customer records are imported into Salesforce. On import, CDC events are generated that subscribers can react to. In this case, an external subscriber receives each event, which carries the latest copy of the changed record, and writes that record to an external database.
This approach allows other systems to query account data in external databases without directly accessing Salesforce.
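The subscriber's job in this pattern can be sketched as follows, assuming a simplified event shape modeled on the `ChangeEventHeader` that CDC events carry (`changeType`, `entityName`, `recordIds`), with a plain dict standing in for the external database. Field names beyond the header are illustrative.

```python
# Minimal sketch of a CDC subscriber applying events to an external copy of
# the data. A dict stands in for the external database; the event shape
# mirrors the ChangeEventHeader that CDC events carry, in simplified form.

external_db: dict[str, dict] = {}

def handle_change_event(event: dict) -> None:
    """Apply one CDC event to the external copy of the data."""
    header = event["ChangeEventHeader"]
    for record_id in header["recordIds"]:
        if header["changeType"] == "DELETE":
            external_db.pop(record_id, None)
        else:  # CREATE, UPDATE, UNDELETE carry the changed field values
            row = external_db.setdefault(record_id, {})
            row.update({k: v for k, v in event.items()
                        if k != "ChangeEventHeader"})

handle_change_event({
    "ChangeEventHeader": {"changeType": "CREATE",
                          "entityName": "Account",
                          "recordIds": ["001xx0000001"]},
    "Name": "Acme",
})
assert external_db["001xx0000001"]["Name"] == "Acme"
```

Other systems then read from `external_db` (in practice, a real database) instead of calling Salesforce.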
This scalability advantage comes with some complexity. Message delivery isn't guaranteed, so in rare cases a change in Salesforce can be lost, causing the external system to fall out of sync. It is important to have a reconciliation process in place to correct any data synchronization issues that arise over time. Even with a reconciliation process, streaming events do not provide the same guarantees as a transactional approach. However, if your business needs can be met within these constraints, streaming events can provide enormous scalability benefits.
As part of the Salesforce platform, Heroku is well suited to handling high volumes of API requests at scale. For example, a common pattern is to use Heroku Connect to keep data synchronized between Salesforce and Heroku Postgres.
Here, a large number of client systems access Heroku to retrieve data that is kept in sync with Salesforce. As demand grows, the number of Postgres databases and Heroku dynos can scale to meet it, while the load on Salesforce itself is unaffected.
As with streaming events, this scalability introduces some additional complexity. External clients need to query the Postgres database directly or connect via a custom API implemented in Heroku web dynos. Heroku Connect has powerful management tools, but you should consider how sandbox refreshes interact with your integration testing environments. Also, as with streaming events, there is a risk that data may fall out of sync in rare cases.
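The read path against the synced copy can be sketched as below, using SQLite as a stand-in for Heroku Postgres. Heroku Connect conventionally maps Salesforce objects into a `salesforce` schema and exposes the Salesforce record ID in an `sfid` column; since SQLite lacks schemas, a `salesforce_account` table approximates that here, and the columns are otherwise illustrative.

```python
import sqlite3

# Sketch of a client read against the Heroku Connect-synced store. SQLite
# stands in for Heroku Postgres, and salesforce_account approximates the
# conventional salesforce.account mapped table.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE salesforce_account (sfid TEXT, name TEXT)")
conn.execute("INSERT INTO salesforce_account VALUES ('001xx0000001', 'Acme')")

# Reads hit the synced copy, not Salesforce, so they consume no API limits.
row = conn.execute(
    "SELECT name FROM salesforce_account WHERE sfid = ?",
    ("001xx0000001",),
).fetchone()
assert row == ("Acme",)
```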
Security is another consideration, as the sharing and ownership model in Salesforce does not carry over to Heroku. If you have complex security requirements, you may face additional complexity in your Heroku implementation.
For many companies, this approach offers the best of both worlds. Salesforce provides a flexible infrastructure with the consistent scale that businesses require day-to-day, and Heroku complements it with the ability to scale to meet the demands of high-volume processes that would be constrained if run directly on Salesforce.
MuleSoft's Anypoint Platform supports highly scalable integrations with an architecture similar to the Heroku architecture just described. Cloud-based workers can satisfy incoming read requests and scale dynamically as needed. In addition, the Anypoint Platform's tools simplify much of the work of setting up these integrations.
For example, to support high-volume reads, MuleSoft Anypoint can act as an API gateway in front of Salesforce data. By itself, this doesn't solve the read-heavy problem, but the Anypoint Platform can also cache responses. Depending on the nature of your data, this caching capability can significantly reduce load on Salesforce and minimize custom code.
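The caching pattern itself can be sketched generically: a time-to-live cache sits in front of the fetch, so repeated reads within the TTL never reach Salesforce. This is an illustration of the pattern, not MuleSoft's API; in Anypoint the equivalent behavior is applied through configured caching policies, and `fetch_fn` here stands in for the call through to Salesforce.

```python
import time

# Generic TTL-cache sketch of the gateway caching pattern. fetch_fn stands in
# for the call through to Salesforce; cached reads consume no API limits.

def make_cached_reader(fetch_fn, ttl_seconds: float = 60.0):
    cache: dict[str, tuple[float, object]] = {}

    def read(key: str):
        now = time.monotonic()
        hit = cache.get(key)
        if hit and now - hit[0] < ttl_seconds:
            return hit[1]            # served from cache: no Salesforce call
        value = fetch_fn(key)        # cache miss: one call to Salesforce
        cache[key] = (now, value)
        return value

    return read

calls = []
read = make_cached_reader(lambda k: calls.append(k) or {"Id": k})
read("001xx0000001")
read("001xx0000001")   # second read is served from the cache
assert len(calls) == 1
```

How much this helps depends on how stale the cached data is allowed to be, which is exactly the "timely" question raised earlier.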
Salesforce provides great scalability out of the box, but as a shared system, governor limitations always create a performance ceiling. For read-heavy business needs, you may want to consider architectures that include Heroku, MuleSoft, or external systems updated through Salesforce streaming events. In future posts, we'll explore the various integration patterns that Salesforce customers are using to address these scaling challenges.
Steve Stearns is a Regional Success Architect Director with more than 10 years at Salesforce and 20+ years of experience implementing scalable integrations. This post is a collaborative effort of the Scalable Architecture team within the Salesforce Customer Success organization. Contributors include Manish Agarwal, Tia Harington, Samuel Holloway, Tushar Jadhav, Ravikumar Parasuram, Maha Rama Krishnan, Ramesh Rangeya, Suchin Ringan, Paul Rose, and Mukul Singh.