
SAP Performance 101: Improving ABAP Code

A few days ago I received the SAP Community Voice Newsletter in my email. I recommend subscribing to the SAP newsletters, since they have really interesting articles on different topics. Anyway, in the newsletter I found a really interesting article about improving ABAP code performance that I totally recommend reading:

How To Improve Your ABAP Code Performance

I want to give you a perfect example of how bad ABAP code can completely destroy the performance of a program.

The problem

An end user complained about the overall performance of the system. As usual, after checking with the user, the performance issue was located in a few Z programs that he uses almost every day.

As usual, the system information is the following:

  • SAP ERP 6.00 EHP8.
  • MS SQL Server 2012 running on its own VM.
  • Windows Server 2012 R2.
  • Kernel 749 Patch 413.
  • Central Instance + Dialog Instance. Both running on individual VMs.

The investigation

First of all, I executed the transaction and noticed high execution times in all the executions. Just to be clear, a high execution time does not always mean a performance issue. As I explained before, the user's perception of the execution time cannot be taken as an indicator of a performance issue on an SAP system. Users are human beings (most of the time) and nowadays humans are impatient…

Let's start by doing a performance trace of the program using the ST12 transaction. I copied part of the data in the screenshot:

ST12 trace of Z program

What is interesting here is that almost the whole execution time goes to the ABAP side. There is almost no database time, which is unusual. On the calls side we can see that two table reads are consuming 66% and 33% of the total time. In this case it is clear that we have to take a look at both READ TABLE statements, since there is something strange about them…

Checking the ABAP code

So I checked the ABAP code and found the READ TABLE statements in question. The first thing I saw was the SELECT INTO TABLE:

Seeing this, we already know that SELECT * is not the best way to perform a SELECT. As the article stated, the SELECT should be limited to the data we actually want/need, because a SELECT * is inefficient. Anyway, the performance issue is not in this SELECT, since no execution time was registered on the database side. I kept checking the code and found a SORT of the gt_alis_imputac table:
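As an illustration of what restricting the field list means (the source table and field names below are hypothetical, since the original code only appears in the screenshot), the difference looks roughly like this:

```abap
* Inefficient: reads every column of the database table
SELECT * FROM zimputac
  INTO TABLE gt_alis_imputac.

* Better: read only the fields the program actually uses
SELECT pernr budat kostl
  FROM zimputac
  INTO CORRESPONDING FIELDS OF TABLE gt_alis_imputac.
```

The fewer columns the database has to fetch and transfer, the less work is done on both the database and the application server.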

So this part of the code should improve the performance of the following READ TABLE statements done on this table. The READ TABLE statements are the following:

So basically what happens here is that, since gt_alis_imputac is defined as a standard table, the READ statements perform a linear search through the table. It really doesn't matter that the person who created the program wrote a SORT before the READ TABLE; without BINARY SEARCH the read still scans the table record by record.

Even worse, during the LOOP where the READ TABLE statements are performed, the code modifies gt_alis_imputac by adding new records with an APPEND. This destroys the sort order after the first insert and impacts performance.
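Reconstructed schematically (the field and structure names are my own, as the original code only appears in the screenshots), the problematic pattern looks roughly like this:

```abap
SORT gt_alis_imputac BY pernr.  " sorted once, before the loop

LOOP AT gt_other INTO ls_other.
  " On a STANDARD table, without BINARY SEARCH, this is a linear
  " search: the earlier SORT does not speed it up at all
  READ TABLE gt_alis_imputac INTO ls_imputac
       WITH KEY pernr = ls_other-pernr.

  " APPEND adds at the end of the table, destroying the sort order
  " from the first new record onwards
  APPEND ls_new TO gt_alis_imputac.
ENDLOOP.
```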

Doing things the right way

One easy action we can take to improve this code is to use a BINARY SEARCH in each READ. The READ TABLE code would be something similar to this:
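A sketch of such a READ (the key fields are hypothetical, matching the reconstruction above) could be:

```abap
" BINARY SEARCH requires the table to be sorted by exactly
" the fields used in WITH KEY, in the same order
SORT gt_alis_imputac BY pernr budat.

READ TABLE gt_alis_imputac INTO ls_imputac
     WITH KEY pernr = ls_other-pernr
              budat = ls_other-budat
     BINARY SEARCH.
```

Note that BINARY SEARCH only returns correct results if the table really is sorted by those fields; on an unsorted table it can silently miss existing records.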

Even better, if we declare gt_alis_imputac as a SORTED TABLE we don't need to SORT the table at all, and we can skip the explicit BINARY SEARCH: the READ statement automatically performs a binary search on sorted tables. The only requirement is that the fields in the WITH KEY condition match the key components of gt_alis_imputac.
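Sticking with the same hypothetical key fields, the sorted-table version would look something like this:

```abap
" Declared as a sorted table: the kernel keeps it ordered by the key,
" so no explicit SORT and no BINARY SEARCH addition are needed
DATA gt_alis_imputac TYPE SORTED TABLE OF ty_imputac
     WITH NON-UNIQUE KEY pernr budat.

" A READ using the table key automatically runs as a binary search
READ TABLE gt_alis_imputac INTO ls_imputac
     WITH TABLE KEY pernr = ls_other-pernr
                    budat = ls_other-budat.
```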

Binary search is faster and more efficient than linear search. If we don't write BINARY SEARCH then we are using linear search, which checks the records of the table one by one. For small tables a linear search is fine, since the performance impact is negligible, but when we have big tables like in this case the best practice is to use a BINARY SEARCH. What happens then is that the table is divided into two halves; the search checks the middle element and decides in which half the searched element must be. It then divides that half in two again and repeats. This image explains it perfectly:

Binary Search vs Linear Search

Notice the number of steps a binary search needs compared to a linear search on a sorted table. As for adding new records to a SORTED TABLE: if you use INSERT instead of APPEND, the new records are placed in the correct position automatically.
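For the hypothetical sorted table declared above, that would be:

```abap
" APPEND on a sorted table causes a runtime error if the new line
" would break the sort order; INSERT ... INTO TABLE lets the kernel
" place the line at its correct sorted position
INSERT ls_new INTO TABLE gt_alis_imputac.
```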

Performance after modifying the code

I performed a new ST12 trace after improving the code:

ST12 trace of Z program after improving code

Several things to remark here:

  • The total execution time went from 2.947 seconds to 2.7 seconds as a result of the changes we performed.
  • The top calls right now are the two SELECT * statements that I mentioned before. We can improve the execution time further by using specific fields instead of * in the SELECT.
  • The READ TABLE statements went from 66% and 33% of the total execution time to 8.7% and 7.9%. Most noteworthy, the execution time for those calls went from 1.900 and 1.002 seconds to 0.2 seconds.

Conclusions

I think you get the idea of how bad coding can greatly impact the performance of a program. In this case the solution was really easy to implement and it didn't take longer than 5 minutes.

Best coding practices and recommendations are created for a reason, and it is quite common to ignore them. If you are a programmer, please spend time improving your code. I know it is hard and you probably have better things to do, but I promise it is completely worth it.
