Monthly Archives: June 2013
Dolphin bugs fixed in May 2013
Again, we have a new bug fix release. Here are the bug fixes that Dolphin users who upgrade can benefit from:
- Bug 304918: Update the ‘Recently Accessed’ and ‘Search For’ sections of the Places Panel when Nepomuk is started or stopped. See git commit 24609e78, review request 110323.
- Bug 255819: Ensure that manual changes of the icon for the ‘Sort By’ toolbar button are remembered. See git commit f12345a5, review request 109966.
- Bug 319660: When dropping the trash in a directory, create a link, rather than copying the trash contents to that directory. See git commit 6072f58c.
- Bug 319747: Fix crash when copying many files. See git commit 1b9726e2.
- Bug 315796: Make it possible to change the parameters of ‘read only’ search queries. See git commit 2c68fe33, review request 110324.
- Bug 309740: Prevent files in expanded folders from being shown with a size of 16 EiB while they are being changed. See git commit 56cf52db, review request 110428.
- Bug 299675: Fix problems when moving files out of or into expanded folders in Details View. See git commit a79c3a39, review request 110401.
- Bug 319336: Fix a crash that could happen when editing the tags of a file. See git commit 2080bc1d.
Thanks to Daniel Faust, David Faure, and Vishesh Handa!
Some Thoughts On Algorithmic Complexity, Part 1
When people develop and maintain software, they usually try to make sure that their code not only works well, but also that it runs fast. There are many different ways to make code perform better. It can be as simple as choosing reasonable compiler options, or as hard as rewriting some code in assembly.
If you see some code that takes a long time to run, you should not start optimizing by looking into the little details, though. The first questions that you should ask are: are we actually using the right algorithm for the job? Are there any design mistakes that make things much worse than they need to be? This is especially important if you don’t know in advance how much input the code will have to handle. It should be obvious that it makes a difference if a list that needs to be sorted contains 1000 or 1 million items. However, how big that difference actually is depends on the choice of the algorithm and its correct implementation.
If you are not interested in mathematics or theoretical computer science, feel free to skip the next sections. Maybe the example programs at the end of the post will be interesting for you nonetheless, at least if you are fairly familiar with programming.
Big O notation
The exact time f(N), measured in milliseconds, CPU cycles, or some other suitable unit, that an algorithm requires if it is given an input of size N (N could be the length of a list or the length of a string, for example) is often less interesting than the growth rate, i.e., how fast that time grows if N becomes larger and larger. Is it constant, or does it increase logarithmically, linearly, or even worse?
To classify the so-called run-time complexity of an algorithm, one often uses the big-O notation. It can be used to provide an upper limit for f(N) in a rather simple way, even if writing down f(N) exactly would yield a very complex expression. The idea is to use a simple function g(N), and say that
f(N) = O(g(N))
if there is a constant c such that
f(N) ≤ c · g(N)
for any N, i.e., if this constant times g(N) is an upper bound for f(N). Simple example: if
f(N) = 4 N + 2 log N + 3,
we could write f(N) = O(N) because the linear term 4 N is the one which grows fastest, such that the total value f(N) is always smaller than some constant times N.
Typical orders (of growth) that often occur in algorithms in the software that we use include O(1), O(log N), O(N), and O(N log N).
I’ll give some examples for situations where these occur:
- O(1), constant: a simple example in the Dolphin context would be showing the files and folders on the screen. Even if you have 100,000 items in a directory, only a small, constant number of them can be shown at the same time, so the effort required to draw them should not depend on N at all.
- O(log N), logarithmic: the classical example is binary search. If you are given a sorted list of N items, and you have to find a particular item, you might start by looking at the element in the middle. This tells you if you have to look in the first or in the second half of the list. It is easy to show that applying this procedure repeatedly will yield the right item in O(log N) steps (a small code sketch follows after this list).
- O(N), linear: a Dolphin example would be loading the data for all files in a directory (like the name, size, and other things that we might want to show in the view).
- O(N log N): if a good sorting algorithm is used to sort a list of N items, this is the typical number of comparisons of two items that are required, and hence, also a measure of the time that will be needed to sort the list.
- O(N^2), quadratic: this is something that you really want to avoid, unless you can guarantee that you will never have to handle really many items, or there really is no better solution for your problem. If you are not sure if a better solution exists, you should look for one. Really.
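To make the logarithmic case a bit more concrete, here is a small binary search sketch. It is only an illustration of the idea described above; the function name and the sample data are made up for this post, it is not code from Dolphin or any other KDE module:

#include <stdio.h>
#include <vector>

// Returns the index of 'value' in the sorted vector 'items', or -1 if it is not found.
// Every step halves the remaining search range, so at most about log2(N) steps are needed.
int binarySearch(const std::vector<int>& items, int value)
{
    int low = 0;
    int high = static_cast<int>(items.size()) - 1;
    while (low <= high) {
        const int middle = low + (high - low) / 2;
        if (items[middle] == value) {
            return middle;
        } else if (items[middle] < value) {
            low = middle + 1;   // the item can only be in the second half
        } else {
            high = middle - 1;  // the item can only be in the first half
        }
    }
    return -1;
}

int main()
{
    const std::vector<int> items = {1, 3, 5, 7, 11, 13, 17};
    printf("%d\n", binarySearch(items, 11));  // prints 4
    return 0;
}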
Best case, average case, and worst case
Things are often a bit more complicated: the run-time complexity of an algorithm can depend on the actual input data. For example, certain sorting algorithms might work better if the input data are (almost) sorted, and worse for certain other kinds of data.
Therefore, one often states the best-case, average-case, and worst-case run-time complexity of an algorithm. Which of these is most important depends strongly on what you know in advance about the input data. If they are always in a certain form, an algorithm that has a good best-case performance just for that kind of data might be preferable. On the other hand, if the input data might come from an untrusted source, ensuring a reasonable worst-case performance should have the highest priority if you want to prevent you and your users from becoming the target of algorithmic complexity attacks.
Why am I telling you all this?
This is an obvious question that you might ask now. Everyone who develops software should be familiar with those things already, right? Well, it turns out that a lot of real-world code does have a worst-case (or, sometimes, even an average-case) run-time complexity that is worse than it needs to be, either because the wrong algorithm has been chosen, or because a tiny mistake in the implementation makes things go terribly wrong if a certain kind of input is fed into the algorithm. This is why I thought it might be a good idea to write a series of blog posts to raise people’s awareness of these issues.
Example
Let’s start with a simple example, and assume that we have to write a function that replaces all occurrences of a certain character in a plain C string by another character. E.g., replace ‘u’ by ‘o’ in the string ‘algurithm’ to obtain the right spelling ‘algorithm’. Let us also ignore that we don’t actually use plain null-terminated C strings much in KDE/Qt code, and that solutions for this problem exist already.
Any algorithm to solve this problem will have to go through all N characters of the string once, and look at and possibly replace each character. So we would expect that the run-time complexity is O(N).
This is a possible solution that we could come up with:
void replace1(char* text, char oldChar, char newChar)
{
    for (unsigned int i = 0; i < strlen(text); ++i) {
        if (text[i] == oldChar) {
            text[i] = newChar;
        }
    }
}
Some will already see the flaw in this implementation, but I assume that many would just verify that this function works nicely for a few examples and then happily use it. It might even work OK in real world applications for some time, that is, until someone uses this function to handle a very long string.
A small benchmark that tests this function for strings with lengths between 100,000 and 800,000 characters can be found in my personal KDE scratch repository. The following plot shows the time in milliseconds as a function of the input size (plots were generated with qtestlib-tools).
When you see this plot, you immediately realize that the time depends quadratically on the input size. So our algorithm has a run-time complexity of O(N^2), rather than the expected O(N). This results in an unacceptable run time of 8 seconds just for going through 800,000 characters and replacing some of them! If you are doing something like this in the main thread of an application, the GUI will be completely frozen during that time.
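If you would like to reproduce the effect without checking out the scratch repository: the real benchmark uses QTestLib, but even a minimal hand-written timing loop like the following sketch (the test strings and the output format are made up for this post) shows the quadratic growth clearly:

#include <stdio.h>
#include <string.h>
#include <chrono>
#include <string>

// replace1() exactly as shown above
void replace1(char* text, char oldChar, char newChar)
{
    for (unsigned int i = 0; i < strlen(text); ++i) {
        if (text[i] == oldChar) {
            text[i] = newChar;
        }
    }
}

int main()
{
    // Double the input size a few times and measure how the run time grows.
    for (size_t length = 100000; length <= 800000; length *= 2) {
        std::string text(length, 'u');  // a string that consists of 'u' characters only
        const auto start = std::chrono::steady_clock::now();
        replace1(&text[0], 'u', 'o');
        const auto stop = std::chrono::steady_clock::now();
        const auto milliseconds =
            std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
        printf("%zu characters: %lld ms\n", length, static_cast<long long>(milliseconds));
    }
    return 0;
}

If you run something like this, you should see the measured time roughly quadrupling whenever the input size doubles, which is exactly the O(N^2) behavior visible in the plot.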
What went wrong?
The problem is the condition ‘i < strlen(text)’ in the for loop. This condition is evaluated in every iteration of the loop, i.e., N times. Moreover, strlen(text) must go through the entire string, because it looks for the terminating null character, which defines the end of the string. This means that the O(N) function strlen(const char*) is executed N times. Now it is obvious why our function is O(N^2).
Correcting the mistake
Here is a better version of the function, and benchmark results that show that it needs only linear time. Note that the time to handle 800,000 characters has been reduced from 8 seconds to 1.5 milliseconds:
void replace2(char* text, char oldChar, char newChar)
{
    const unsigned int length = strlen(text);
    for (unsigned int i = 0; i < length; ++i) {
        if (text[i] == oldChar) {
            text[i] = newChar;
        }
    }
}
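As a quick sanity check (the main() function and the test string below are my own illustration, not part of the benchmark), the corrected function still fixes the misspelling from the example at the beginning of this section:

#include <stdio.h>
#include <string.h>

// replace2() exactly as shown above
void replace2(char* text, char oldChar, char newChar)
{
    const unsigned int length = strlen(text);
    for (unsigned int i = 0; i < length; ++i) {
        if (text[i] == oldChar) {
            text[i] = newChar;
        }
    }
}

int main()
{
    char word[] = "algurithm";
    replace2(word, 'u', 'o');
    printf("%s\n", word);  // prints "algorithm"
    return 0;
}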
What about the compiler?
Maybe some readers will now ask if an optimizing compiler shouldn’t be able to see that calculating the length of the string repeatedly is unnecessary, and thus optimize the first version of our code such that it becomes O(N). It turns out that such an optimization is impossible because using our function to replace any char by 0 will actually change the length of the string inside the loop.
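A tiny demonstration (my own example, not from the benchmark) shows why: writing the value 0 into the array moves the terminating null character, so the result of strlen(text) changes while the loop is still running, and the compiler is not allowed to move the call out of the loop.

#include <stdio.h>
#include <string.h>

int main()
{
    char text[] = "algurithm";
    printf("%zu\n", strlen(text));  // prints 9
    // Replacing 'u' by 0 - which is what replace1(text, 'u', 0) would do -
    // moves the terminating null character and thereby shortens the string:
    text[3] = '\0';
    printf("%zu\n", strlen(text));  // prints 3: the string is now "alg"
    return 0;
}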
Do professional developers really write code which has such problems?
Yes, they do. According to an article that I once read on the web (I cannot find it at the moment, unfortunately – if you do, please add a comment!), the “strlen in for loop condition” issue once caused huge problems in the computer system of a bank. Moreover, there is still a lot of code out there which has similar problems, even if they are sometimes less obvious. In the next posts in this series, I’ll tell you about some further examples, which will be fixed in KDE 4.11.