Topic : Why C++ Sucks
Author : buzzard


early 1990's.)

Moreover, those lists are far too short, as they don't call attention to the bewildering variety of problems introduced by function name overloading.

At least if all the overloaded functions sharing a name take different numbers of parameters, the result of a call is unambiguous. A grep for the name will turn up a number of matches, and if a declaration spans more than a single line, some additional effort may be needed to figure out just which one is being called. Annoying, but not impossible.
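For example (the names here are invented, purely for illustration):

    #include <cstdio>

    // Two overloads distinguished purely by argument count.
    void log_msg(const char *msg)               { printf("%s\n", msg); }
    void log_msg(const char *msg, int severity) { printf("[%d] %s\n", severity, msg); }

    int main(void)
    {
        log_msg("starting up");    // can only be the one-argument version
        log_msg("disk full", 2);   // can only be the two-argument version
        return 0;
    }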

Far worse is multiple functions with the same name and the same number of parameters. You have to figure out the (compile-time) type of every argument, exactly, before you can make the right call about which function is called. Go look up through the code to determine the type of any variable used; check in the header file to see what type is returned by this function; try to remember whether * means a dot product or a cross product of two vectors.
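Here's a made-up sketch of the situation (none of these names come from any real codebase):

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Three functions, one name, each taking two arguments; only the exact
    // static types of the arguments determine which one a given call reaches.
    float mul(float a, float b)                // scalar product
        { return a * b; }
    float mul(const Vec3 &a, const Vec3 &b)    // dot product
        { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3  mul(const Vec3 &a, float s)          // scaling
        { Vec3 r = { a.x*s, a.y*s, a.z*s }; return r; }

    int main(void)
    {
        Vec3 v = { 1, 2, 3 }, w = { 4, 5, 6 };
        printf("%f\n", mul(2.0f, 3.0f));   // first overload
        printf("%f\n", mul(v, w));         // second overload
        printf("%f\n", mul(v, 2.0f).x);    // third overload
        return 0;
    }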

Ok. Now you've got the types.

Go read the definition for how the "best" match for an overloaded function is resolved. I'll still be here. Go ahead.

Set intersection. I don't know about you, but I don't normally do much set intersection when I write function calls.

Ok, let's be fair. You can state it unambiguously in English without reference to set intersection: the 'winning' function must have all its parameters "type match" at least as well as those of all the other candidates, and at least one of its parameters must "type match" better. (Set aside the rules for "type matching", and the inclusion of user-defined type conversions in them. This rant is already way too long.)
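To see the rule bite, consider a hypothetical pair like this:

    void f(int x,    double y);
    void f(double x, int y);

    // f(1, 2);
    //
    // The call above is ambiguous: the first overload matches the first
    // argument better (exactly) and the second overload matches the second
    // argument better, so neither is at least as good on *every* parameter,
    // and the compiler rejects the call.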

It's easy, in fact, to see how the specified rules mirror human intuition about the best match. At least, each rule in isolation does so. I have my doubts about the combination.

Still, I find it a bit uncomfortable. I worry about the compiler's intuition not matching mine. I'd be more comfortable if the compiler only picked out a particular function for me if it was unambiguous; say, because every parameter was a better match for the "winner".

Problem is, that would preclude having, say, all the candidate functions share a common first parameter of the same type. Such functions would always match equally well on that parameter, so none of them could ever be the unambiguous winner. It's easy to see why C++ uses the rule it does.
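Something like this (again, invented names):

    struct Context { int unused; };

    // Both overloads share a first parameter of the same type, so on that
    // parameter they always match any call equally well.  A rule demanding a
    // strictly better match on *every* parameter could never pick either one;
    // under the actual rule, the second parameter settles it.
    void draw(Context &c, int radius);         // draw a circle
    void draw(Context &c, const char *label);  // draw a string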

The above considerations were based on a programmer who was trying to intentionally leverage function name overloading. What about one who isn't?

Suppose in C I define a function "foobar" in one module, and define another one with the same name in another module, but with different argument types. In draconian fashion, C will produce a linker error, and force me to rename one or the other.

Is this so bad?

Consider the alternative found in C++: these two functions may be totally unrelated, but through a coincidence of the English language (the same word having two different meanings; consider simply the word 'heap' in the sense of a semi-ordered data structure versus a pool of memory) they share an identical name. In C++, name mangling means those two functions can happily live within the same namespace, and within the same project.
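A sketch of the kind of thing I mean (hypothetical functions, shown in one file for brevity):

    // audio.cpp -- "foobar" in one sense
    void foobar(float volume)                   { /* ... */ }

    // network.cpp -- a completely unrelated "foobar"
    void foobar(const char *hostname, int port) { /* ... */ }

    // In C, two external definitions of foobar would collide at link time.
    // In C++ the mangled names encode the parameter types, so both coexist
    // quietly -- and both join the candidate set at every call to foobar().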

Is this a problem?

What happens if I'm calling foobar() somewhere in my code, and then someone introduces a new #include in my code which now brings the other foobar() into scope? What if I was relying on some automatic type conversions in my call to foobar(), and the new foobar() now matches "better"?
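Something like this hypothetical scenario:

    #include <cstdio>

    void foobar(double x) { printf("foobar(double): %f\n", x); }

    // Pretend this second overload arrives later, pulled in by a newly added
    // #include somewhere above the call site:
    void foobar(int x)    { printf("foobar(int): %d\n", x); }

    int main(void)
    {
        // This call used to rely on the int -> double conversion and reach
        // foobar(double).  With foobar(int) visible it is now an exact match,
        // so the very same line silently calls a different function.
        foobar(3);
        return 0;
    }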

And think about this: is it good that the different functions could come via different semantic mechanisms? So if I grep for "foobar", thinking it is coming from one sort of place, I may miss that a "better match" is being introduced through a different compile-time indirection?
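For instance, a candidate can sneak in through a using-directive rather than through any declaration a grep for "foobar" would lead you to expect (the names below are made up):

    namespace gfx  { void foobar(double) { } }
    namespace math { void foobar(int)    { } }

    using namespace gfx;
    using namespace math;   // added later, perhaps buried in some header

    void caller(void)
    {
        // No file-scope declaration named "foobar" exists here, yet both
        // namespace members are candidates; math::foobar(int) wins as the
        // exact match.  A grep expecting the gfx version may never notice
        // the using-directive that changed the answer.
        foobar(3);
    }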

And think about this: is it good that I can add "default arguments" to function declarations, thus messing up my attempt to cull out possible function calls based on the argument counts not matching?
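One more invented sketch:

    #include <cstdio>

    void report(int code)                     { printf("report(int)\n"); }
    void report(long code, int verbosity = 0) { printf("report(long, int = 0)\n"); }

    int main(void)
    {
        // One argument at the call site, so counting arguments would seem to
        // rule out the two-parameter declaration -- wrongly: its default
        // argument keeps it a perfectly viable candidate.  (Here report(int)
        // still wins as the exact match.)
        report(7);
        return 0;
    }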

What a freaking pile.

