Why is my system slow? Performance and Dynamics NAV

Over the years, we at iNECTA have been brought in many times to troubleshoot performance: a system that, for some reason, runs slowly in a high-volume business. There are many reasons why a system can slow down, and in this blog post I want to highlight the difference between infrastructure and application bottlenecks.

The first area people blame when the system slows down is infrastructure: either the machine is slow or the network is congested. This is the layer most visible to the user. There is a box on your desk and another box in a closet, both blinking with lights. They are connected by a wire, so any one of those three things must be at fault, and the solution must be to improve or replace them. Although this is often the case, and great improvements can be made by upgrading infrastructure, there is another layer that often gets overlooked: the application.

The application layer is not as visible to the user. There is nothing you can physically touch or see; it's just a mysterious thing that happens when the user interacts with the computer. Most people assume its speed depends only on the performance of the workstation, server, or network. This could not be further from the truth. A badly designed or badly programmed application can cause far worse performance problems than any hardware.

Most applications are installed on your computer, and there is not much you can do about the way they are designed. You might be able to tune some settings, but that is usually the extent of it. You are at the mercy of what is called the application black box: you send input into the box and it gives you output. How it goes about producing that output, you have little or no control over.

With Dynamics NAV, the consultant has enormous control over the design and programming of the application. The consultant can change the way the system accepts input and produces output, which can yield far more dramatic improvements than changes at the infrastructure layer. I would like to outline a very simple example that occurs all too often in the programming layer and might not be obvious to an inexperienced eye.

Let's assume in our example that we have N numbered documents scattered in random order on one table in a room. Your job is to find X of those documents and move them to another table across the room. One way to do this is to search the N documents for the first one to move, and once you find it, walk it over to the other table and come back. This process is then repeated X times.

We can immediately see that this process can be sped up by first finding all the documents, putting them in a stack, and moving the whole stack over to the other table in one trip. If walking over to the other table takes one second, we have just shaved X − 1 seconds off our process. In many cases X can be in the thousands.
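To make the difference concrete, here is a minimal Python sketch of the two approaches. The names and the simulated one-second walk are purely illustrative, not anything from NAV itself:

import time

def walk_across_room():
    time.sleep(1)  # the one-second round trip from the example

def move_one_at_a_time(table, wanted):
    # One search and one round trip per document: X trips in total.
    other_table = []
    for number in wanted:
        document = next(d for d in table if d == number)  # linear search
        walk_across_room()
        other_table.append(document)
    return other_table

def move_as_a_stack(table, wanted):
    # Collect everything in one pass, then make a single trip.
    wanted_set = set(wanted)
    stack = [d for d in table if d in wanted_set]
    walk_across_room()
    return stack

With X documents to move, the first function pays for X walks and the second for just one, which is exactly the X − 1 seconds saved above.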

How about that random order of documents? When we are looking for something in an unsorted pile, we have to check the documents one by one. Imagine looking for a name in a phone book that is not sorted. Say each comparison takes 0.5 seconds. Finding one document out of 1,200 would take 600 comparisons on average, or about 300 seconds, roughly 5 minutes. If the documents were sorted, a binary search would need only about log2(1200), roughly 10 comparisons, or about 5 seconds.
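The same contrast in Python, using the standard bisect module for the sorted case. Again, the numbers and names are only illustrative:

import bisect

def linear_search(documents, target):
    # Unsorted pile: compare against every document until we find it.
    for i, number in enumerate(documents):
        if number == target:
            return i
    return -1

def binary_search(sorted_documents, target):
    # Sorted pile: each comparison halves the remaining search space.
    i = bisect.bisect_left(sorted_documents, target)
    if i < len(sorted_documents) and sorted_documents[i] == target:
        return i
    return -1

The linear version makes N/2 comparisons on average; the binary version makes about log2(N).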

So let's put these numbers into formulas and compare. Assume we have 10,000 documents and have to find 1,000 of them to move. Assume also that a computer is doing the work (not a very fast one), so that a move takes 0.01 seconds and a comparison takes 0.01 seconds.

In the first scenario (unsorted documents, one trip per document), each search makes 0.5 × N comparisons on average, so our approximation looks like this:

X × (0.5 × 0.01 × N) + X × 0.01 = 1000 × (0.5 × 0.01 × 10000) + 1000 × 0.01

= 50,010 seconds, or about 14 hours

In the second scenario (sorted documents, binary search, one trip for the whole stack), each search makes about log2(N) ≈ 13.29 comparisons:

X × (log2(N) × 0.01) + 0.01 = 1000 × (13.29 × 0.01) + 0.01

≈ 133 seconds, or just over 2 minutes
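These figures are easy to check. A few lines of Python, using the same assumed costs as above:

import math

N = 10_000      # documents on the table
X = 1_000       # documents to move
COMPARE = 0.01  # seconds per comparison (assumed above)
MOVE = 0.01     # seconds per trip across the room (assumed above)

# Scenario 1: unsorted pile, one trip per document (N/2 comparisons per search).
scenario_1 = X * (0.5 * COMPARE * N) + X * MOVE

# Scenario 2: sorted pile, binary search, one trip for the whole stack.
scenario_2 = X * (math.log2(N) * COMPARE) + MOVE

print(f"Scenario 1: {scenario_1:,.0f} seconds")  # 50,010
print(f"Scenario 2: {scenario_2:,.0f} seconds")  # 133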

The difference is striking: about 2 minutes compared to 14 hours. When the number of documents is very low, the difference hardly matters, but as the system accumulates data, the first approach will render the system practically useless.

In conclusion, improving your hardware may give you significant gains, but very real improvements can also be made in the application itself, where the consultant controls how the data is retrieved and manipulated. It does require an experienced person to figure out where the bottlenecks are and how to solve them. We at iNECTA have solved many such problems over the years, transforming a system that people thought could not handle its volume of data into a lean, streamlined machine ready to take on a lot more.
