Is Object Oriented Programming worth it in 2023?
What is OOP, really, and what has it done to our code? In retrospect, is OOP a good thing? Is it all good? A little bit bad maybe? What are the true pros and cons? How did OOP play out over the last few decades? Adventure awaits! Huzzah!
One. Object Oriented.
The language is object oriented. Allow me to explain.
How we invented OOP Part 1: ML, Assembly and BASIC
Back in the day, we had BASIC. CBM BASIC V2 on the Commodore 64 was great, but we got by on the VIC20 just fine. The C128 had more stuff that I initially thought was useless — like a sprite editor. I learned to appreciate these new features long after my C128 died. But the point is, we got by with what we had to get by with. So, we learned a very important rule: Always define and comment your variables at the start of the program so there is no confusion over where things go.
```basic
10 $A = 10 : REM NUMBER OF LOOPS
20 $P = 2 : REM NUMBER OF PLAYERS
30 FOR T = 1 TO $A : PRINT "HELLO WORLD" : NEXT
```
Now, in a more complex program this led us to write code as follows:
```basic
10 $SA = 10 : REM SHIP SIZE
20 $SB = 15 : REM SHIP MASS
30 $SC = 100 : REM SHIP'S MAX SPEED
40 $SD = 4 : REM MAX NUMBER OF MISSILES
50 $SE = 100 : REM MAX SHIELDS
60 $SF = 50 : REM MAX ROCKET FUEL
70 REM ETC
```
As you may begin to see, we were funneled, shaped, into a certain way of coding by many factors. One could say the machines remolded how we thought. This habit really came not from BASIC but from ML and assembly language programming: in those languages, especially on a C64 or another machine with very limited memory, you address variables as bytes in memory, laid out one after another, carefully pigeonholed and tracked by the programmer. If you didn't record the address of each variable and what it was for, it existed only as an anonymous patch of memory, and there would be no way of knowing its purpose. The practice of naming variables took off much later, with C, although many BASIC programmers began naming things in an organized way when they could, to combat this problem of variable organization.
Part 2: C and C++
Usually the first real programming language we BASIC fogies were exposed to was C, and hopefully soon after, C++. With C we had this wonderful ability to give functions and variables whatever long names we wanted, and we didn't even need pesky dollar signs in front of variables. We also gained the ability to treat functions as variables via pointers, which made sense: this was a feature from ML/Assembly Language that was never put into BASIC but was put into C.
Another feature of C was the introduction of the struct. Watch carefully how we would define the variables of a hypothetical space ship in C:
```c
#define MAXBUF 128

struct SpaceShip {
    char shipName[MAXBUF];
    int mass;
    int fuel;
    int size;
    int shields;
};
```
This form of organization allowed us to do exactly what we did before, but enforced organization and access via the language.
Now we could do things like struct SpaceShip s1; and we had to access the data like s1.fuel = 10; and life was grand. We had what amounted to namespaces; we had what amounted to objects. You could even put function pointers in a struct:
```c
#include <stdio.h>

struct SpaceShip {
    int (*drawShip)(int, int);
};

int someFunction(int x, int y) {
    printf("someFunction was called with coordinates %d, %d\n", x, y);
    return x + y;
}

int main() {
    struct SpaceShip s;
    s.drawShip = someFunction;
    int result = s.drawShip(42, 1);
    printf("The result is %d\n", result);
    return 0;
}
```
So, basically, you can create little groups of code and variables which must be accessed under a namespace. Another obvious and great improvement in all of this was the ability to put the code in separate files.
All of this is nothing really new in the sense that you could have — and should have — organized your code this way in ML/ASM/BASIC (or just plain C without structs). The benefit of structs — or Classes in C++ — is that the structure of the language supports and encourages you to think like this. But note that you don’t have to program this way. You could write a C program like you write a BASIC program, if you wanted to — carefully laying out the variables and the functions, in one monolithic file. But you could also take advantage of the structure and organization enabled by the language itself.
Overuse: Java, PHP, Javascript and Others
I know there are other languages, like Lisp and Python, but the point is that people began to overuse the OOP model. Everything became an object, and things got messy. For example, if you are writing a roguelike or tile-based game, let's say that map tiles, monsters, items, the player, and so on are all derived from a single class called "object". The object has an "inventory" which can link to other objects, like a linked list (or nodes in a tree structure). For a map tile, the inventory would hold whatever is on the tile. For a monster or player, it could hold carried items.
This kind of building upon other objects, called inheritance, can make a lot of sense. But it also causes extreme confusion if overused. For example, a "mob" (a so-called "moving object", i.e. a mobile, meaning a monster in the game) does not really need to share a codebase with a map tile (or room). I have encountered far too many problems with this model to ignore them casually. The basic idea, however, is sound. The issue is that too much freedom was given to humans to mold the computer to their way of thinking, rather than molding the programmer to the computer. The problem is that the computer can't always do what the human wants. It only does exactly what you tell it to do.
Off the deep end: Haskell, and so forth
These kinds of programming languages go even further, in that they are "reactionary". I'm broaching a different problem here, so let me limit it to their idea of encapsulating ideas into units which are essentially objects; the interpreter then somehow rewrites the program as a series of objects instantiated in a linked list or tree-like structure. It's kind of like RPL: it doesn't make sense from a logical perspective, and computers don't think that way anyway. No, saying "I am hungry… not" is not how humans think. So why force people to do unnatural things and then try to trick computers into jumping through all the hoops necessary to make them work with that data? All in the name of "computer science". Well, friend, they have gone too far. Computers will never operate in this way; computers are and always will be fundamentally imperative, because they interpret data over time. If you want to write computer programs you have to be aware enough of what you are doing that you can actually solve the problems that arise. You cannot dictate a syntax that will keep programmers from misunderstanding the basic nature of the hardware. You can limit them, but if you limit them too much you create a toy language, such as BASIC, which does not have pointers.
Some problems can be solved by a good library, like how MeekroDB prevents many SQL injection attacks in PHP. But you have to be aware of the issues surrounding what you are doing, or no amount of syntactic sugar will protect you. This is why SWIFT sucks so bad. They tried, but they failed, and they failed mainly because they did not learn their lesson from failed languages like Haskell and Rust. Web Assembly is another disaster. Who was the marketing genius who modeled that language after LISP? Well, it's par for the course: Javascript is not multithreaded. That is not the real problem, but it is the clue to understanding the next problem we will discuss. But for now…
What we learned from trying to implement OOP
It is valid and useful to have classes. Python's foolish requirement to pass self in the constructor, however, is clearly a kludge. So we look to the Java model for classes. Each class has to be in its own file, generally speaking. And no header files. Header files, frankly, are useless. As for a preprocessor, you can have one if you really want, but if you use one, you are really just shifting the blame, kicking the can down the road. Some amount of can kicking is helpful, of course. Go ahead and admit it, there is a use for GOTO as well. Go ahead, I don't care, I really don't. I agree.

But the point is how OOP is facilitated. Do we need pointers? Yes, because no human can ever foresee every kind of data structure we will need to use. That's why the STL is only half-successful. But we do need standard types: strings, lists (as in Python and PHP), and so forth (maps, vectors, arrays, sets, all of them and more!). The point is facilitation without over-drubbing these concepts into something you absolutely have to use. OOP is a human abstraction layered onto the language. It does not actually exist. You do not need OOP. You can write OOP in flat-file imperative BASIC if you want to, merely by organizing the code. But, we have learned, having language features which facilitate this saves time and helps us code.

Think carefully about "saves time": most coders do not use assembly language as a main language, and I would venture to say no coder uses assembly (or C, or any language) without external libraries. Code reuse and saving time are the major impetus for using OOP. We learned we need it because it saves us time. But we do not always need it. Ill-designed OOP can get in our way, or let us trip over our own two feet, and that can be bad.
Conclusion: Principle 1.
If you want to write C code that is essentially BASIC with functions, and implement everything yourself, you have to be able to do this without the programming language getting in the way and forcing you to do something else. An example of a language that utterly fails this principle is Swift. Languages which succeed are Java and C++: you do not have to write classes. Actually, anything with try/catch is a borderline pass. Try/catch is good; the ability to throw is good. Facilitate this like Java does, or better, and it's probably good, or at least ok. One problem with try/catch blocks, and with SWIFT's ridiculous unwrapping, is that they create a separate namespace. This was surely unintentional, and a failure on so many levels. It was probably done because they copied the implementation from a language like C and made changes without truly understanding why things were done that way originally.
Garbage collection is along these lines. You must allow it to be turned off, and also to be triggered on demand. A keyword that removes a variable from garbage collection is enough, as is a system call that performs the collection. Alternatively, create a mode where the programmer specifies that certain variables are to be collected. Keywords like volatile are also important. Java and C++ do a good job here.
This also speaks to multithreaded programming: allow us to do it and get out of the way. fork() is fine. Anything lighter than that is fine. Superlight would be “re-entrant code”. In general, the Java model wins here; I suspect C++ is a strong A+ here too.
Are you noticing a pattern? Essentially, C, C++ and Java are the models for a successful language: it has to be able to work like C, but there also have to be extensions or facilitations like "class" and "import". Famously, you do NOT need header files, and you do NOT need to have things declared before they are used. Swift, Python, etc. forcing you to declare a function before you use it is a painful "feature" that amounts to laziness on the part of the interpreter. If it can find a function in a class or in a different file, even with a circular reference there, then why can't it do the same thing in a flat file of functions? The answer is "Computer Science", but the rationale is flawed and these are failed languages in that regard. Toy languages. Like Javascript.
Speaking of Javascript, we will examine the major flaws of the Toy Language known as Javascript, how they relate to modern game engine design and mobile device programming (e.g. Android, iPhone), why "they" should have known better, how trivial their mistakes would be to fix, and how EPL can and will fix these problems immediately.
Principle 1. If I want to write imperative code which is essentially BASIC or Assembly or C, the language has to get out of my way and let me do that. I don't want to be forced into anything, but I want the facilitation necessary to allow me to do things like this. If this means mixing COBOL, FORTRAN and LISP into some kind of blob, so be it. But the language should make sense. Fortunately or unfortunately, C is the gold standard of the "right direction" after ML/ASM/BASIC, and C++/Java show the next steps, in general, from there. Other languages like Ruby, Swift, Go, Rust and Haskell are just failed attempts at saying "Look at me, I'm different" and hoping it would catch on. None of them really adds anything a programmer needs over and above what C++ and Java already added. In some cases, merely removing certain restrictions which were nonsensically placed on the language is the solution, as with Swift; in others, key design mistakes need to be repaired, such as local vs. global class variables in Python.
EPL has all of the good stuff described above and none of the bad stuff. That’s principle 1 and that’s why it’s so good. The question is, will EPL be better than C, C++ or Java? The answer is yes. It will be more like Swift, Python, Haskell, Rust, Go, LISP and Ruby. But without all the crazy crappy baggage those systems introduce.
More soon.
Filed under: Uncategorized - @ March 31, 2023 8:10 am