eiConsole v.20R1
Documentation
PilotFish eiPlatform Performance Guidelines
When configured and tuned properly, the eiPlatform itself is rarely the cause of poor performance. An instance of the eiPlatform running on a 4-core, 16 GB server processes in excess of 1,000 Transactions Per Second (TPS) end-to-end (from the time a transaction is received by the Listener to the time it exits the Transport), for both X12 EDI and HL7 transactions.
That being said, here are some common causes of poor performance:
- Insufficient cores or memory have been allocated, or the eiPlatform is collocated with another application with which it competes for machine resources.
- Processing exceptionally large transactions or batches of transactions that can use up available memory.
- Bad or inefficient XSLT in the transformation stage.
- Suboptimal configuration of the number of threads in the New Threading Model (NTM).
- Latency of external applications. This could be:
- On the front end, where transactions are not received in a timely manner.
- On the back end, where transactions cannot be processed quickly enough by the Target system.
- In the middle of the process, where callouts to external systems are performed. This can have an especially big impact where FIFO processing is required, since a stalled callout for one transaction can cause all of the transactions behind it to back up.
How to attain maximum performance:
- First, allocate a sufficiently large server and, for optimum throughput, do not share its system resources with other applications.
- Next, the eiPlatform has built-in, configurable means to optimize throughput.
- The eiPlatform performs data transformation in memory, so sufficient memory should be allocated for large data transformations.
- The eiPlatform supports Forking of large transactions or transaction sets to break up large incoming files into “chunks”, reducing memory requirements for large file transformations and improving throughput.
- While the Data Mapper helps generate well-formed and efficient XSLT, it can’t prevent the creation of inefficient XSLT. Here are some tips published on StackOverflow for producing efficient XSLT:
- Keep the source documents small. If necessary, split the document first.
- Keep the XSLT processor (and Java VM) loaded in memory between runs.
- If you use the same stylesheet repeatedly, compile it first.
- If you use the same source document repeatedly, keep it in memory.
- If you perform the same transformation repeatedly, don’t. Store the result instead.
- Never validate the same source document more than once.
- Split complex transformations into several steps.
- Avoid repeated use of “//item”.
- Don’t evaluate the same node-set more than once; save it in a variable.
- Avoid <xsl:number> if you can, for example by using position() instead.
- Use <xsl:key>, for example to solve grouping problems.
- Avoid complex patterns in template rules. Instead, use <xsl:choose> within the rule.
- Be careful when using the preceding[-sibling] or following[-sibling] axes. This often indicates an algorithm with n-squared performance.
- Don’t sort the same node-set more than once. If necessary, save it as a result tree fragment and access it using the node-set() extension function.
- To output the text value of a simple #PCDATA element, use <xsl:value-of> in preference to <xsl:apply-templates>.
PilotFish technical staff have considerable experience and expertise in optimizing XSLT and are always available to provide support.
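To make a few of these tips concrete (compile the stylesheet once, keep the processor loaded, and reuse the compiled form across documents), here is a minimal, generic Java (JAXP) sketch. The stylesheet and file names are placeholders, and the code is illustrative only; it is not eiPlatform configuration or part of the eiPlatform API.

```java
import java.io.File;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class CompiledStylesheetExample {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();

        // Compile the stylesheet once; the resulting Templates object is
        // thread-safe and can be reused for every transformation.
        Templates compiled = factory.newTemplates(new StreamSource(new File("mapping.xsl")));

        // Reuse the compiled stylesheet instead of re-parsing it per document.
        for (String input : new String[] {"claim1.xml", "claim2.xml"}) {
            Transformer transformer = compiled.newTransformer();
            transformer.transform(new StreamSource(new File(input)),
                                  new StreamResult(new File(input + ".out.xml")));
        }
    }
}
```

Compiling once and reusing the Templates object avoids re-parsing the stylesheet for every document, which is the reasoning behind the “compile it first” and “keep the processor loaded” tips above.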
- Next, the PilotFish New Threading Model can improve performance through configurable thread pools. Each thread pool has several parameters to set, including:
- Base Thread Count
- Max Thread Count
- Idle Timeout
- Queue Size
- Expiration Timeout
For more information on NTM, please review the “Threading Module NTM Manual”.
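The NTM parameters listed above resemble the knobs on a standard Java thread pool, so a short, generic java.util.concurrent sketch may help build intuition for how such settings interact. The values below are arbitrary examples, the mapping to NTM semantics is only approximate, and this is not how NTM is actually configured; the Threading Module NTM Manual remains the authoritative reference.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        int baseThreads = 4;     // threads kept alive even when idle (cf. Base Thread Count)
        int maxThreads  = 16;    // upper bound under heavy load (cf. Max Thread Count)
        long idleSecs   = 60;    // how long surplus threads wait for work before exiting (cf. Idle Timeout)
        int queueSize   = 1000;  // bounded backlog of pending work (cf. Queue Size)

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                baseThreads, maxThreads,
                idleSecs, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(queueSize));

        // In this standard pool, work queues up behind the base threads;
        // extra threads (up to maxThreads) are created only once the queue is full.
        for (int i = 0; i < 100; i++) {
            final int id = i;
            pool.submit(() -> System.out.println("processed transaction " + id));
        }

        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

In a pool like this, the queue size and the maximum thread count together determine how bursts are absorbed versus how quickly the pool scales up; a similar trade-off applies when sizing NTM thread pools.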
- Next, while the latency of external systems is outside of the control of the eiPlatform and should be addressed separately by the responsible organization, there are some ways to limit its impact on the overall transaction throughput.
- Set up queues before the eiPlatform Listeners and after the eiPlatform Transports. Queues in front of the eiPlatform allow large bursts or batches of transactions to be buffered and processed more evenly, enabling more efficient use of the eiPlatform instance. Queues after the Transports let the eiPlatform send high volumes of transactions toward the target system without overwhelming the target or requiring throttling to slow the eiPlatform’s throughput.
- Avoid callouts to third-party systems that have high latency, especially when the response from that system is in the critical path of completing the Interface or process orchestration. Make sure that timeouts are configured, with routing to an error file, to prevent “clogging the pipe” (see the sketch after this list).
- Back to the first point: allocate a sufficiently large server, or servers, to handle the volume. While the eiPlatform is extremely efficient at processing high volumes of transactions, that does not mean it can process an infinite number of transactions on a single instance. As the saying goes, “you can’t fit 10 lbs into a 5 lb box.” Sometimes you simply have to deploy bigger servers or more servers to handle the workload. Fortunately, the eiPlatform is both vertically and horizontally scalable. Multiple instances of any size can be deployed to support extremely high volumes, load balancing, high availability and failover.
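As a generic illustration of the callout-timeout advice above, the following Java sketch uses the standard java.net.http client to bound how long an external lookup can stall. The URL and timeout values are placeholders; in the eiPlatform itself, callout timeouts and error routing are set through configuration as described above, not written as code.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class CalloutTimeoutSketch {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))   // fail fast if the remote system is unreachable
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.org/lookup"))
                .timeout(Duration.ofSeconds(10))         // cap the total wait for a response
                .GET()
                .build();

        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Callout succeeded with status " + response.statusCode());
        } catch (Exception e) {
            // A timed-out or failed callout is routed to error handling
            // instead of blocking the transactions queued behind it.
            System.err.println("Callout failed, routing to error handling: " + e.getMessage());
        }
    }
}
```

Bounding both the connection attempt and the overall request keeps a slow third-party system from backing up the transactions queued behind it, which matters most in FIFO flows.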
PilotFish IT staff can assist you with optimizing for performance after gaining a thorough understanding of your use case, architecture and interface flows.