Now what happens when you take those 16 subprograms and move them into separate processes on the same computer?
Now what happens when you take those 16 subprograms and move them to separate computers all around the world?
Now what happens when you take those 16 subprograms, still spread all over the world, and every time they need to pass information to one another you do the following:
- Print out the data to be passed.
- A person picks the document up off the printer.
- The person figures out which subprogram the information is destined for.
- The person sends the document via interoffice mail to the office that has the right server running the target subprogram.
- At the target location, a person takes the message and types it into the server program by hand.
- The person waits for a response.
- If there is a response, the person at the target location prints out the document.
- The person sends the response document back to the originating site.
- At the originating site, a person takes the return message, types it in by hand, and waits for the next message.
In the last case you have a system that can perform fewer than 16 operations per day. Over time it will probably degrade to fewer than one per day due to contention issues and wait states.
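To put rough numbers on the thought experiment, here is a minimal back-of-envelope sketch in Python. Every latency figure is an order-of-magnitude assumption on my part, not a measurement, but that is enough to show how throughput collapses at each step:

```python
# Back-of-envelope throughput for each architecture above.
# Every latency figure is an order-of-magnitude assumption, not a measurement.

SECONDS_PER_DAY = 24 * 60 * 60

architectures = {
    "one process (function call)":       1e-9,      # ~1 ns per call
    "separate processes (local IPC)":    1e-5,      # ~10 us per message
    "separate computers (network hop)":  1e-1,      # ~100 ms per round trip
    "interoffice mail + manual typing":  2 * 3600,  # ~2 hours per round trip
}

for name, seconds_per_operation in architectures.items():
    ops_per_day = SECONDS_PER_DAY / seconds_per_operation
    print(f"{name:34} ~{ops_per_day:,.0f} operations/day")
```

With these assumed figures, the last line comes out to about 12 operations per day, consistent with the fewer-than-16 estimate above, and that is before contention and wait states make it worse.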
Now, let's take the model above, which no computer scientist would dispute is an extreme failure, and apply it to business.
What happens to a business process when you divide up all of the key functions into 16 tasks and hire an expert who can complete all 16 tasks, where all communication and problem solving occur between neurons in the expert's head?
Now what happens when you take those 16 tasks and move them to different low-cost workers, all located in the same shared space?
Now what happens when you take those 16 tasks and move them to low-cost workers based around the world?
Now what happens when you take those 16 tasks, performed by low-cost workers based all around the world, and between each person you add change-control, governance, and approval steps?
In the first case, you have a system that can perform complex problem solving and task execution in minutes.
In the last case, you have a system that can perform many tasks at the same time, but because of contention issues, wait states, and communication millions of times slower than neuron-to-neuron signaling, processes that used to take a few minutes can now take many months or even years. This is the model that most companies are selecting as optimal for their IT organizations.
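The "millions of times slower" claim is easy to sanity-check. As a rough sketch (again, the figures are order-of-magnitude assumptions: about a millisecond for a synaptic hop, and about a business day for a handoff that crosses a ticket queue and an approval gate):

```python
# Rough ratio of a governed human handoff to a neuron-to-neuron hop.
# Both figures are order-of-magnitude assumptions.

NEURON_HOP_SECONDS = 1e-3        # ~1 ms per synaptic transmission
HANDOFF_SECONDS = 24 * 60 * 60   # ~1 day per ticketed, approved handoff

ratio = HANDOFF_SECONDS / NEURON_HOP_SECONDS
print(f"Each handoff is ~{ratio:,.0f}x slower than a synaptic hop")
# With these assumptions: ~86,400,000x slower.
```

Even if you assume a one-hour handoff instead of a full day, the ratio is still in the millions.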
I am constantly amazed that IT staff understand so little computer science that they fail to see that the current hyper-specialization and sourcing models are not only significantly more expensive when you account for all costs, but literally millions of times less efficient and productive. It is as if our competitors were designing our organizations. There is no better model for high expense and catastrophic failure than the current one.
Most companies could replace hundreds of over-specialized IT staff and departments with a handful of experts who are allowed to control all aspects of their role.