As a software job it lacked most of what professionals would consider "minimum standards" these days: the product had to be functional at all times, and there was no such thing as "release management". We were working on a live production system, at all times. The version control system was called "making a backup", and involved keeping enough backup copies of the database image on floppy disk, an image which included both the data and the code. There were no code reviews, no unit tests, no bug tracking database, and no development team meetings. For a simple, single-developer, non-networked DOS application, it worked fine. But what happened between 1984 and 1994 is that the software development world fell apart, over and over again, and each time, something new was supposed to save it. Object-oriented programming will save us. Blobbygrams (OOA/OOD) will save us. Metaprogramming will save us. Patterns will save us. Formal methodologies will save us. Each of these software tools has its place, or had a place at some time, but nothing has ever been a panacea.
Why do we have so much procedure and process now, and so many tools that we never needed or didn't know we needed, back in the 1980s? Because we can't just fly by the seat of our pants. At a certain level of complexity, ad hoc approaches stop working at all, and lead to almost certain project failure.
Software projects often fail, even when there is a formal process, and the diagnosis we most often reach is that the project is "out of control": even though many or most people on the project know the process is out of control, they can't agree on how to bring it back under control.
I love version control systems, because they are time machines for code. They are part of keeping a software process under control, and knowing what code went to what customer, in what version of your project. They're great. You should never work without one. One of the other things that version control was supposed to do was prevent things from changing that we want to keep frozen. Some version control systems even require you to "check out files" before you can work on them, and that "check out" action changes the files from "read only" to editable. (Visual SourceSafe and Perforce are the two most commonly encountered version control systems that require you to first get a read-only copy of the whole file set, then execute a "check out" command to make individual source files writable before you can edit them.) Part of the reason for that "read only" flag was that in the early days, version control systems lacked real merge capabilities, or merging was difficult, perhaps considered "risky" or "scary". Most major projects that I have worked on try to achieve such a "frozen" state, or "stable branch". Stability is part of a project being under control.
The other half of a project being under control, paradoxically, seems to be that everybody wants to keep cramming features into the product. This schizophrenia (stable, yet with new features) seems to be the proximal cause of projects going out of control, in my experience. Rapid, uncontrolled progress on a project leads to one kind of diagnosis of project failure (it's unstable and unusable), and yet that rarely happens anymore; at most places, projects are seen to be out of control for the reverse reason: nobody can explain or justify how slowly the progress on new features is going.
The most successful software projects I have ever worked on, and all of my favorite jobs, have had one thing in common: the projects that I worked on were "under control". That is, bad stuff was minimized, but also, expectations of developer productivity were reasonably in sync with what was realistically possible.
My best-ever bosses have all been people who knew what a PID loop was, and most of them could even tune one if they were asked to. A PID loop has at least one sensor input that reads something from the real world, like temperature, or RPM, or air pressure, or perhaps a virtual input such as the price of a stock on the NYSE. It then has an output, which is something that can hopefully be used to affect the thing we're trying to control. If the input was a temperature sensor measuring the temperature of a liquid, the output might be a relay on/off control attached to a heater, or it might be a variable output controlling a valve, which can change the pressure or flow of a gas or liquid, or perhaps the output might be the commands to a stock-market buying-and-selling system. What a PID loop does is take three coefficient terms in an equation, a Proportional term, an Integral term, and a Derivative term, and use those coefficients to do realtime control of a process. When a process is "under control" it behaves in a predictable way, even when it's disturbed. If the sun came in the window and heated up the liquid that we're trying to control, a properly tuned PID controller would handle that disturbance, and the process would not go out of control.
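Since the PID loop is the central metaphor here, a minimal sketch may help make it concrete. This is my own toy illustration, not taken from any real control system: a hypothetical liquid heater whose temperature we try to hold at 60 degrees C, with gains and plant behavior I invented for the example.

```python
# A toy PID controller and heater simulation. All names and numbers here
# are illustrative assumptions, not from any real controller.

class PID:
    """Proportional-Integral-Derivative controller."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint   # the value we want the process to hold
        self.integral = 0.0        # accumulated error over time
        self.prev_error = None     # last error, for the derivative term

    def update(self, measurement, dt):
        """Return the controller output for one time step of length dt."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: a liquid heated by the controller output, losing heat to a 20 C room.
temp = 20.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=60.0)
for _ in range(200):
    power = max(0.0, pid.update(temp, dt=0.1))   # a heater can only heat, not cool
    temp += (power - (temp - 20.0) * 0.5) * 0.1  # heating minus ambient heat loss
# After the loop, temp has settled close to the 60 C setpoint. A disturbance
# (sunlight warming the liquid) would just show up as error and be corrected.
```

Tuning, in this sketch, means picking kp, ki, and kd so the temperature settles quickly without oscillating; the integral term is what removes the steady-state offset a proportional-only controller would leave behind.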
Software processes are not as simple to control as "one single input", but they do respond to logical analysis, and this logical analysis is conducted at a glacial pace. Once or twice within 20 years, someone comes up with something like the "SDLC" or "Waterfall" or "Scrum" or "Agile" approach to software development. These are promoted as a software panacea. Inevitably, certifications and formal processes take over from informal insights into project management, crowd out whatever good ideas were at the core of these software development "control" practices, and take all the effectiveness, and certainly all of the fun, out of being a software professional. It's particularly sad to see "Agile" and "Scrum" get twisted, since the original ideal behind "Scrum" was exactly the insight that software processes are not universally equal, and that practices that work in one context might not be workable in other contexts. So, while "Scrum" should have been resistant to such perverse misuse, it has been widely noted that what killed Waterfall could kill Agile, and Scrum.
So given all that, you'd think that I would argue that developers should just be left alone to do what they do, and take as long as they're going to take, and all that. That would be a spoiled, unreasonable, and ultimately self-destructive viewpoint. The best projects I have worked with, and the best managers I have ever worked for, did not give developers enough autonomy that they could derail a business plan and imperil a company's future. That would have been ridiculous. But what they did do was figure out what sorts of controls and measurements of the software process were effective, and apply at least those methods and measurements that could be shown to be useful. They were agile without using the word agile. They didn't have code reviews. They didn't have scrums. But they had something which is perhaps the foundational principle behind Scrum:
Managers, stakeholders, and developers co-operated, and worked together. Developers were respected, but not allowed to run the show. Managers were technically competent, understood business requirements, and could ascertain whether or not developers were effective and making sufficient headway. Nobody got together for daily standup meetings and said "I'm blocked" or "No blockers", as if that would help. But when a developer needed a tool, he would go to his boss, and he'd get questions, intelligent ones, about whether it was needed or not, and if the need seemed real, he would get his tool bought. When a developer was not making good progress, the approach was to see what could be done, pragmatically, to get something done at a reasonable time, even if it wasn't the full spec that everybody would have dreamed of. That kind of rational change of scope, and attempt to protect project milestones, was as effective (whether we call it timeboxing, or sprints, or milestones) as it could have been, given that what really held us back and delayed projects was the same thing that always delays projects: inexact, incomplete, incoherent, mutually contradictory, or vague requirements, due to a lack of real understanding of the underlying business requirements, or a misunderstanding about the real useful nature of the software product.
Pragmatism should be the primary value in every development organization, and on every project. Pragmatism stays focused on business. The business of writing software. It doesn't go down blind alleys, it doesn't play blame games, and it doesn't wonder about water that's gone under the bridge, but it sticks to the questions: what do we do now, what do we do next, and how do we prevent issues that have affected our ability to do great work from hurting us all again? Pragmatism takes collective responsibility for success, and doesn't blame individuals. It doesn't play political games, and it doesn't stab people in the back. Pragmatic professional software development is not a buzzword, or an approach that replaces human judgement. In fact it relies on human judgement, and only works when you're sane, sensible, and centered in reality. It's just recognizing that there's a lot of superstition and magical thinking out there in the software development world that needs to be replaced with careful, rational, friendly, collegial, scientific realism.