Object-oriented programming was a very powerful idea, evidenced by the fact that it rapidly and completely overran the prior generation's structured programming as the preferred conceptual framework. It gets its power from two things:
- Encapsulation. The idea of a software object maps very well to our understanding of objects in general. A microwave oven, for example, has a state that can be read from its LED display, and methods that can change that state in the form of buttons that can be pressed. Designing software systems this way makes their interfaces simpler and easier for engineers to understand. It also isolates problems, so that bugs in the microwave don't affect the refrigerator.
- Inheritance. Just as a goat inherits many of its features from being a mammal and a quadruped, so software objects can derive much of their functionality from older, well-debugged ancestors. While they may differ in details, well-designed base classes allow specific sub-classes to sport rich functionality with very little actual coding.
These are inherent in using object-oriented approaches. In other words, unless a language has both encapsulation and inheritance, it's not object-oriented. But there's a third thing. It doesn't have a name, as far as I know, and it's handled inconsistently among object-oriented languages. Objective-C handles it reasonably well; in C++ it's a disaster.
I'll call it anonymity. This is the idea that an algorithm or object can act on other objects without having to know up-front what they are. The concept of anonymity is essential to both democracy and the free market, and for much the same reason it's important in software too. If I'm going to perform a service I don't need to know my client's life history; I just need to know enough to do what I have been hired to do.
Inheritance can take us pretty far. If an object inherits from a class that can use an interface, then it can use that interface too. It doesn't reveal its ultimate identity, but it does reveal its ancestry. This is often good enough and can solve a lot of problems. Except one.
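Here's a minimal C++ sketch of that ancestry-based anonymity; the Appliance and Microwave names are invented for illustration:

```cpp
#include <iostream>

// Any code that accepts an Appliance can drive any of its
// descendants without knowing their ultimate identity.
class Appliance {
public:
    virtual ~Appliance() {}
    virtual void powerOn() = 0;
};

class Microwave : public Appliance {
public:
    void powerOn() override { std::cout << "Microwave: beep\n"; }
};

// This function knows only the ancestry, never the concrete type.
void startUp(Appliance &a) {
    a.powerOn();
}

int main() {
    Microwave m;
    startUp(m);  // any future Appliance subclass works here too
}
```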
That problem is allocation, which in C++ looks like this:
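```cpp
T *t = new T();  // T must be a class name, fixed at compile time
```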
new is a reserved C++ keyword that must be followed by a class name. This is the only way (as far as I know!) to initiate the chain of actions required to allocate a new instance of a given object type. Invoking new allocates memory sufficient for the object and then calls a series of constructors that initialize it from the base class(es) upwards. But because C++ is a statically typed language, the T above is lexical -- it can only be resolved during compilation. It's not possible to call new on a type known only at runtime.
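To make the limitation concrete, here's a sketch of the function one would like to write; makeByName is an invented name, and the commented-out line is not legal C++:

```cpp
#include <string>

class Base { public: virtual ~Base() {} };

// What we want: allocate an object whose type arrives as runtime data.
Base *makeByName(const std::string &className) {
    // return new className;  // error: className is a value, not a type
    return nullptr;           // the language gives us no direct answer
}
```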
While there may be anonymous objects in C++, there are no anonymous types. And that is a bit of an issue, the same way a brick wall built across a freeway is a bit of an issue.
The C++ solution -- and you can see that they dimly recognized that there was a problem by designing a half-assed solution -- is templates. Templates simply allow entire functions and classes to be nothing more than macros to be invoked later when the actual types are finally known. This approach is so complex that few C++ compilers supported it at first, and even in modern compilers it still causes strange and unexpected issues.
And why? Templates are used for several things, but I would argue that the one need no other C++ mechanism can meet at all is the new operation.
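Here's a minimal sketch of how a template defers new until the type is finally known; maker is an invented name:

```cpp
// The compiler stamps out a fresh copy of this function for every
// concrete T it encounters -- a macro expanded at compile time.
template <typename T>
T *maker() {
    return new T();  // legal only because T is resolved during compilation
}

int main() {
    int *p = maker<int>();  // instantiates maker<int> right here
    delete p;
    // There is still no way to pass in a type known only at run time.
}
```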
Objective-C solves this problem with the concept of the meta-class. This is a run-time object -- just like other objects -- that represents the class itself. new is just a method on this class, so it's possible to write a function that can allocate any object based on a run-time input. This is not possible in C++. As a result, the anonymity afforded by templates remains reserved for small-scale classes -- things like lists and queues -- but never anything large. Doing anything large always involves abstractions sophisticated enough to resolve the dilemma caused by new despite the limitations of the C++ language.
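For contrast, here's a sketch of the kind of abstraction a C++ programmer must build by hand to get what the meta-class gives Objective-C for free: with it, the makeByName that was impossible earlier becomes writable. The registry and the Widget class are invented for illustration:

```cpp
#include <map>
#include <string>
#include <functional>

class Base { public: virtual ~Base() {} };
class Widget : public Base {};

// A hand-rolled stand-in for a meta-class: a table mapping
// run-time names to little factory functions that call new for us.
std::map<std::string, std::function<Base*()>> registry = {
    { "Widget", []{ return new Widget(); } },
};

Base *makeByName(const std::string &name) {
    auto it = registry.find(name);
    return it != registry.end() ? it->second() : nullptr;
}

int main() {
    Base *obj = makeByName("Widget");  // the type is chosen at run time
    delete obj;
}
```

Note that every class must still be registered by hand, which is exactly the bookkeeping the meta-class performs automatically.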
This is something that C++ should have handled internally, whatever the complexity, instead of presenting it as a roadblock to the programmer.
UPDATE: part 4