As I was working with the latest release of a software solution (an operating system), I began running into issues that were pretty obvious defects. As I pondered how I could be hitting problems that should have been caught during some level of quality assurance, I started thinking about the uniqueness of software development today. Then I recalled the last interview I was on. I was peppered with questions about my experience with development. Not so much what my experience was, but how recent it had been. I finally looked at the interviewer and said, “Look, I’ve been in an executive role for the last ten years. In my current job, I report to the CEO. In my prior job, I was two levels below the CEO. Neither one of them wanted me spending my time coding. In fact, I remember the CEO at my last company saying that if I was doing the tasks that my team members should be doing, he was paying me too much.” This individual’s questioning stopped a short time thereafter. He got it, but it didn’t change the underlying rationale for his questioning.
I’ve been involved with the development of applications for the better part of three decades. My first paid consulting engagement, in fact, was to write an application for a legal referral service. I created a networked application to track inbound callers and automatically assign an attorney based on the caller’s legal needs. The application not only needed to track all of the caller’s information, it also had to assign attorneys so that no attorney was given too few (or too many) leads compared to other qualified attorneys. It was further complicated by a requirement to prioritize attorneys based on their historical spend. It was pretty leading edge for a number of reasons. (For anyone in the Los Angeles area, by the way, their business is still going strong–800 THE LAW2.)
First, the platform was all based on PCs. PCs were new at that point. The choices were an IBM compatible machine (IBM, Compaq, or “clones”), or an Apple product (Apple II or the newly released Macintosh). Second, the systems were all linked to a single server–that is, they were networked to use a single database. (Not literally one database…a combination of databases that were centralized.) As commonplace as this became five to ten years later, it was really leading edge at the time.
It presented quite a number of challenges for me as the developer. Oh, and I was also the guy who designed and built the network. So, I was busy–and I didn’t get to blame the networking/systems team if my application did something wrong (since I was the networking/systems team). Building applications was no picnic. It was necessary to build in a substantial amount of logic and code that anticipated all of the “wrong” things a user might try: empty fields that shouldn’t be empty, other fields that should only contain numbers, etc. It wasn’t easy. As a one-man show, I was the designer, the coder, the tester, and the documentation lead. Load testing consisted of me going into their site after hours (a bit of a misnomer, as they had REALLY late hours) and running mini-applications I built to simulate multiple users doing multiple things.
Further complicating this effort was the fact that it was a multi-user system. So, in addition to all of the bad things a user might do based on their own interactions, the application now had to have “awareness” of what other users were doing that could impact the data. If two people were looking at the same record, and one of them decided to edit it, something had to be done to prevent the other user from editing it at the same time. Most importantly, something had to be done to keep that other user from wasting her/his time changing information that couldn’t ultimately be saved.
I went through multiple iterations of this solution. The brute-force method (version one) was to prevent anyone from viewing a record if someone else was already viewing it. The logic was relatively sound (not perfect): if someone was updating a record, anyone else looking at it was seeing stale/incorrect information, so I should prevent that. Subsequent versions allowed viewing with a notification that the record was being edited elsewhere, then record locking (you can look, but you should know you might be looking at bad data–oh, and you cannot touch), and ultimately field-level locking (you can change anything here, but I’m not going to let you change the phone number, as another person is already doing that task). This was all pretty solid–except when the workstation doing the editing/viewing locked up, or the server went down. Then I had to build something else to “force unlock” that record; databases didn’t yet have networking awareness built in.
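For readers who never had to build this by hand, here is a minimal sketch (in C, and purely illustrative; the original code is long gone) of the kind of record-locking logic I am describing, complete with the crude “force unlock” timeout for a workstation that crashed mid-edit. Names like lock_record and LOCK_TIMEOUT_SECS are my own stand-ins, not anything from the actual application.

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define MAX_RECORDS       1000
    #define LOCK_TIMEOUT_SECS 300   /* force-unlock stale locks after 5 minutes */

    /* One entry per record in a shared lock table (illustrative only). */
    struct record_lock {
        int    locked;      /* 0 = free, 1 = locked        */
        char   owner[16];   /* workstation/user holding it */
        time_t acquired;    /* when the lock was taken     */
    };

    static struct record_lock lock_table[MAX_RECORDS];

    /* Try to lock a record for editing; returns 1 on success, 0 if busy.
       record_id is assumed to be a valid index (bounds checking omitted). */
    int lock_record(int record_id, const char *user)
    {
        struct record_lock *l = &lock_table[record_id];
        time_t now = time(NULL);

        /* "Force unlock": reclaim a lock left behind by a crashed workstation. */
        if (l->locked && (now - l->acquired) > LOCK_TIMEOUT_SECS)
            l->locked = 0;

        if (l->locked && strcmp(l->owner, user) != 0) {
            printf("Record %d is being edited by %s -- view only.\n",
                   record_id, l->owner);
            return 0;
        }

        l->locked = 1;
        strncpy(l->owner, user, sizeof(l->owner) - 1);
        l->owner[sizeof(l->owner) - 1] = '\0';
        l->acquired = now;
        return 1;
    }

    void unlock_record(int record_id)
    {
        lock_table[record_id].locked = 0;
    }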
Newly minted developers today would look at this and say, “So what?” And that is what I am going to discuss a bit in this post.
We had no choice but to think, and build the wheel from scratch!
I worked with a friend a couple of years after my example above. He wrote a program that was always resident in the memory of a computer. In those days, memory was a precious commodity. His program (technically known as a “TSR” [terminate and stay resident] program) was a utility that performed a number of functions at a user’s request. It was a “graphical” application (using characters–this was well before the wide use of a graphical interface…a la Mac), and could copy files, undelete files, and a number of other things. I was his tester…or what we termed the “beta tester,” a term that has survived in the industry.
His application was written in Assembly Language, which was one level above “machine language.” Without getting into the technical differences between the two, I’ll merely say that it required a lot of thought and planning before actually coding the solution. Although I was fluent in Assembly Language, I chose a different language (C) to code the application I created for the legal referral service. In terms of efficiency and speed, the order of preference would have been machine language, assembly language, C, and BASIC. (There were others, but these were pretty standard on a PC platform at the time–Pascal was also pretty common.) There are technical reasons, of course, as to why the order of the languages was as noted. Essentially, machine and assembly languages were the “purest” languages. C was a compiled language, and therefore had some overhead. BASIC (at the time) was an interpreted language, so it was the least efficient.
As developers, we faced the reality that we were among the many who were trailblazers in this field. Consequently, we had to really think through our designs and implementation well before we started to write code.
As noted, memory was scarce. Storage was even more scarce. The majority of non-business computer users relied on floppy disks (best case), and some on cassette tapes (worst case). Don’t get me wrong, hard disks existed. But a 10 megabyte hard disk cost thousands of dollars. It wasn’t affordable for most individuals. In fact, most businesses didn’t have them. Additionally, as with any group of people that are doing new things, there wasn’t anything available in the way of “best practices,” hints, or even quick reference guides. The Internet wasn’t available to mere mortals, so looking something up wasn’t as easy as performing a “Google search.” We attended local clubs, which were clusters of like-minded developers who–like us–were learning at the school of hard knocks. These clubs didn’t meet daily, and in fact were usually a monthly gathering. So, unless there were several in an area (which there weren’t–even in Los Angeles), we had to use our own talents to build our applications.
It goes without saying that most of us did things the wrong way for a fair amount of time. Schools were teaching COBOL, and maybe FORTRAN. Perhaps some engineering programs did teach Assembly and C, but I wasn’t in one of those programs. Most of us weren’t. My first attempt at creating an application produced a usable product, but it took a significant number of hours. There was the usual problem of finding things that just didn’t work correctly (bugs/defects), but there were also performance enhancements that I wanted to make before unleashing the product on my customer.
It didn’t take me long to understand that doing things right the first time meant fewer hours in front of the computer. There were times that I’d spend hours trying to track down one bug…only to find it was something really dumb. An assignment of a value instead of a check against the value. A variable with the wrong capitalization (value is not the same thing as Value), etc. When these things happen a few times, it gets really old–and frustrating. So, I learned to plan everything out well beforehand, as noted above. I also learned to apply some level of standardization to my code, such as variable naming conventions.
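To make the “really dumb” bugs concrete, here is the classic example in C. Both versions compile cleanly, which is exactly why it could eat hours of debugging time. (This is a generic illustration, not code from the referral system.)

    #include <stdio.h>

    int main(void)
    {
        int status = 0;

        /* Intended: compare status against 5. */
        if (status == 5)
            printf("status is 5\n");

        /* The bug: a single '=' assigns 5 to status, and the condition
           evaluates to 5 (true), so this branch always runs. */
        if (status = 5)
            printf("this always prints, and status is now %d\n", status);

        /* The capitalization trap: C treats these as two different variables. */
        int value = 1;
        int Value = 2;
        printf("value=%d, Value=%d\n", value, Value);

        return 0;
    }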
As my coding skills increased, so did my knowledge. There were the obvious things to avoid, like spaghetti code. This term refers to code full of explicit jump commands (think GOTO) that tell the program what to execute next. Instead of a logical flow that did things in the order needed (the right way), spaghetti code would jump from one place to another. Not only was this style of coding hard to debug (look for and resolve issues), it had a negative impact on the speed and efficiency of the application. Bluntly, it was messy. (And not as tasty as spaghetti!)
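Here is a tiny, exaggerated illustration in C (hypothetical, not anything I shipped): the first routine jumps around with goto the way spaghetti code does; the second expresses the same behavior as a structured loop.

    #include <stdio.h>

    /* Spaghetti version: control bounces around via explicit labels. */
    void count_spaghetti(void)
    {
        int i = 0;
    top:
        if (i >= 5) goto done;
        printf("%d\n", i);
        i++;
        goto top;
    done:
        return;
    }

    /* Structured version: the same behavior as a plain loop. */
    void count_structured(void)
    {
        for (int i = 0; i < 5; i++)
            printf("%d\n", i);
    }

    int main(void)
    {
        count_spaghetti();
        count_structured();
        return 0;
    }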
One of my first breakthroughs came about six months into my application. At that time, I learned a technique that allowed me to use repeatable code throughout my application. So, when I needed to repeat something in my code (perform a calculation, for example), I could merely “call” a function that did this for me, instead of having to type the block of code each time. Think of this like the “chorus” section when looking at the lyrics for a song. Instead of printing the chorus out every time it repeats, most lyric sheets will just say “(Chorus),” at which point the reader (singer!) repeats what is already documented in the chorus.
Using functions not only saved me time (when the function is perfected, it’s done for all intents and purposes), it made my code more efficient. Beyond functions, I started to write libraries, which were essentially blocks of code that could be used across applications (or modules). This saved a lot of work, as I didn’t have to reinvent the wheel with each new application (or new module) I wrote.
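As a small, hypothetical illustration of the function-and-library idea, imagine a routine written once in its own header/source pair and then “called” from any application that needs it: the code equivalent of a lyric sheet that just says “(Chorus).” The names (mathutils, add_tax) are made up for the example.

    /* mathutils.h -- a tiny "library" shared across applications. */
    #ifndef MATHUTILS_H
    #define MATHUTILS_H

    /* Apply a sales-tax rate to an amount; written once, called everywhere. */
    double add_tax(double amount, double rate);

    #endif

    /* mathutils.c -- the one place the logic lives. */
    #include "mathutils.h"

    double add_tax(double amount, double rate)
    {
        return amount * (1.0 + rate);
    }

    /* main.c -- any application that links against the library "calls the chorus". */
    #include <stdio.h>
    #include "mathutils.h"

    int main(void)
    {
        printf("Total: %.2f\n", add_tax(100.00, 0.0825));
        printf("Total: %.2f\n", add_tax(49.99, 0.0825));
        return 0;
    }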
There are a lot of other things I learned throughout the years, particularly when it came to working with databases. The problem with databases is that they will let a coder do almost anything they want. But, the performance can vary widely if things are done poorly.
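As a quick, hedged illustration of what I mean (using SQLite’s C API as a modern stand-in, since the databases I used back then are long gone, and with an invented calls table): the database happily accepts both versions below, but only the second performs well, because it parses the SQL once instead of on every pass through the loop. Error checks are omitted for brevity.

    #include <sqlite3.h>

    /* Slow: re-parse the same SQL statement on every iteration.
       The database will let you do it; it just won't be fast. */
    void insert_slow(sqlite3 *db, int n)
    {
        for (int i = 0; i < n; i++) {
            sqlite3_stmt *stmt;
            sqlite3_prepare_v2(db, "INSERT INTO calls(id) VALUES (?)", -1, &stmt, NULL);
            sqlite3_bind_int(stmt, 1, i);
            sqlite3_step(stmt);
            sqlite3_finalize(stmt);
        }
    }

    /* Better: prepare once, then bind/step/reset inside the loop. */
    void insert_fast(sqlite3 *db, int n)
    {
        sqlite3_stmt *stmt;
        sqlite3_prepare_v2(db, "INSERT INTO calls(id) VALUES (?)", -1, &stmt, NULL);
        for (int i = 0; i < n; i++) {
            sqlite3_bind_int(stmt, 1, i);
            sqlite3_step(stmt);
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);
    }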
Coding Today–a Contrast
As I pondered my interview with the individual noted above, I asked myself what could have changed so much through the years that it would have rendered my prior skills “outdated” or “not very useful.” And my answer was: not a heck of a lot. In fact, I submit that coders today have it far better than when I began to code. I say this with some level of hesitation, as I don’t want to come across as an old codger. But it’s relevant, so I’ll continue.
Through the years, I submit that core coding techniques/rules haven’t changed substantially. Newer languages have certainly changed the way one writes code specific to that language, however. So, looking at the maturation of C into C++ (and for some, C#), programmers moved from a procedural language to an object-oriented language (or the paradigm of OOP–object-oriented programming). For sure, this added complexity in the way code was written, and it forced programmers to think about programs as collections of objects (through data modeling) that ultimately accomplished a set of goals. Contrast that with a procedural language (like C), which was just that–a set of commands that accomplished a set of goals. Do you see something similar here? “Accomplished a set of goals.” Yep, it doesn’t matter what language one is using to create an application; at the end of the day, the goal is to create a solution to a problem (or set of problems). Period. Time hasn’t changed that. So, the “what” hasn’t changed, but the “how” has changed at a granular level.
Now, I don’t want to downplay the importance of the question I was asked in the interview. His question wasn’t meant to determine whether or not I could code; it was to ascertain whether or not I could help others with their questions regarding coding. Wrapping one’s head around the concept of OOP is, in itself, not easy. There are concepts within OOP that tackle things such as inheritance, objects (of course!), classes, polymorphism, subtyping, etc. The planning I mentioned earlier is done in hyper detail with OOP. If I were seeking someone strong in any modern language (Java, Objective-C/Swift [if you want to write any application for an iPhone/iPad, you’d better know these], Ruby, etc.), I would be hard pressed to rely on someone whose only programming experience was with C. Bottom line: not only was it right for the interviewer to ask the question, he’d have been crazy not to ask it.
Thanks for the tutorial, but what does this have to do with this blog entry?!
No, I don’t have a minimum number of words to meet here, and I don’t get paid by the word. Well, actually, I don’t get paid at all. But, hang in there. One cannot build a stable house without a foundation, impatient reader!
Okay. I believe that several decades of programming have not really done anything more than show a natural evolution to where we are today. People who were programming on mainframes would probably agree that, for the most part, the techniques themselves haven’t changed substantially. There just hasn’t been what I would call a “revolution,” that is, something so spectacular that it made everything else in the past look outdated and–well–archaic. When push comes to shove, programmers still “revert” to lower-level control when their tools shelter them too much from the things they want to do.
Today is easier. And harder.
Looking back at the woes of my early days, I’ll be the first to say that I’m envious of the tools that are available to programmers today. When I first started programming, my “development environment” consisted of a simple text editor. My eyes were the “syntax checker,” and when they failed, I generally found out by something going wrong with my code. As I noted, it was a lot of long nights. Today, developers have integrated development environments (IDEs) in which they write their code. Not only do these environments help with the syntax, they do a heck of a lot more–such as intelligently completing code that needs to be written. (For example, adding a closing brace on a block of code–this one bit me quite a bit in the old days, as well.) So, if they do something wrong, modern IDEs catch the problem at the point it’s introduced. Further, they have build automation built in…as opposed to “tools” in the past that required a lot of scripts–which had to be created from scratch.
In addition to IDEs, developers enjoy an unlimited library of help. If, for example, I wanted to find out the most efficient way to build a query to accomplish a specific task, I’d probably just do a search using one of the many search engines on the Internet. (Ironically, and I’m not making this up, I’ve seen developers do this instead of talking to one of their coworkers a couple of cubes away in the same building!) There are numerous other tools and methods that have taken programming into a much different realm…a much simpler realm.
But it’s a double-edged sword. As I noted earlier, when I was using a procedural language, one of my personal evolutions was to build my own functions and libraries. As coding progressed, almost every language started including its own libraries of functions. Here is where things started to increase in complexity for programmers. For me, if I wanted my application to do something, I built it. Now, the “somethings” are pre-built. But there is a huge level of overhead with that. Why?
Programmers these days not only have to have the core skills I had when I started, they have to memorize a huge number of things that are unique to the language they’re using, the database they wish to access, the operating environment in which the application will be running, and even the IDE they’ll be leveraging. Though it’s possible to get something done without this level of memorization, the underlying application will probably be very inefficient (best case), or fail to run at all when the underlying environments change. For example, let’s say I decided to write my own code to retrieve a bunch of values out of a database. But that particular database has a built-in “function” that can be utilized to do the same thing. It’s all good, right? Not really.
If my code utilizes functionality that is eventually discarded in future “upgrades” of that database (a security patch, a newer version, etc.), my code will cease to work. Or, worse, it can actually cause some bigger problems down the road. If I’d leveraged a method that was built into the language I was using, the worst that would happen is that the vendor of that language would issue release notes saying that the method will cease to work and that an alternate method would be necessary. (And they usually go into detail on this.)
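One common defense, then and now, is to isolate the vendor-specific call behind a thin wrapper of your own, so that when the vendor deprecates or changes its function, only one spot in your code has to change. Here is a hypothetical sketch in C; every name in it is made up for illustration.

    #include <stdio.h>

    /* Stand-in for the vendor's database API (hypothetical). Pretend this is
       the built-in function the vendor ships today and may deprecate in a
       future release. */
    static const char *vendor_fetch_phone(int customer_id)
    {
        (void)customer_id;
        return "555-0123";
    }

    /* My application's wrapper: the only code that touches the vendor call. */
    const char *get_customer_phone(int customer_id)
    {
        return vendor_fetch_phone(customer_id);
    }

    int main(void)
    {
        /* Application code only ever calls the wrapper, so a vendor "upgrade"
           means editing one function, not the whole program. */
        printf("Phone: %s\n", get_customer_phone(42));
        return 0;
    }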
Both Apple (with the iDevice software development kit [SDK]), and Google (with the Android SDK) do this frequently. They’re not alone, however. Oracle and Microsoft do it with their databases. Microsoft with its operating system (ever hear of Windows?). Etc. It’s a common practice, and it will never stop.
So, why evolution instead of revolution?
I say evolution because the changes to coding over the years were generally a response to things that were noted as difficult in the past. So, each subsequent version or language was created to solve a problem, or provide a capability, that wasn’t available in the past.
A lack of repeatable code brought built-in functions and libraries. Unique attributes in operating environments (operating systems, databases, etc.) brought functions that were specific to each of those. Looking at problems at a much higher level (and at a “collection” level) brought OOP. Lost productivity from catching bugs late in the process, or from compiling code, brought IDEs. All of these are evolutions, in my mind.
So, what would be a revolution? In my mind, we’re in the midst of that transition right now. When programs start to write themselves, given some parameters, based on something they’ve detected as “wrong” or “needed,” that will be a revolution. Ironically, the closest I’ve seen to this is malware/viruses that change their characteristics to avoid detection. And, using that as an example, you can probably guess that my view of a “revolution” is when coding behaves more like organic organisms, rather than inorganic objects.