For instance, humans carry a gene for a coat of body hair like a chimp's, yet this gene is turned off towards the end of pregnancy. If a designer didn't want us to have hair all over our bodies like a chimp, why give us a gene for it and then turn it off? Just don't include the gene.
In all of those instances there is some manifest intent behind the leftover 'junk'. In some of them, a cost/benefit analysis drives the decision to leave the junk in place: removing it would carry a non-zero detrimental cost.
In the case of a shared library, it is because the designer of the library intended those additional functions to be available, regardless of whether the designer of any particular application linking that library uses them. If deprecated, unreachable code still exists within the binary of the library, that is a mistake on the library designer's part. The case of an interpreted language is the same.
In the case of hardware, it is typically more of a cost/benefit scenario. If there were no cost associated with spinning custom PCB revisions for multiple product lines, a designer would prefer to do exactly that: leave off things like open pads for zero-ohm configuration jumpers, DNP (do-not-populate) components such as alternate regulators, over-redundant protection circuits, various connectors, etc., all things that add a negligible yet non-zero increase in risk and points of failure. If it were a choice between making the PCB out of FR-4 or another material whose dielectric properties were perfectly suited to my application, with no difference in cost or risk between the two, I'm going to pick the latter.

Frankly, you could extend this justification to software as well. If I want to convert a UNIX timestamp into a human-readable ASCII string, I can either utilize a library that may or may not exist on the target system, and that may have a slightly different implementation than I am expecting, or I can write my own function, knowing exactly how it is implemented and knowing for certain that it will be available to my application at runtime and do exactly what I expect it to do. In resource-constrained land, the appropriate design decision is to not spend the time writing and debugging a UNIX timestamp conversion function, and instead to accept the (rather small) risks associated with using a shared library outside of your control. Because, just as in the hardware situation, the cost of eliminating that (rather small) risk is greater than the benefit.
This kind of thing happens all the time. It is a natural consequence of having limited resources: design reuse and standardized components are the best general approach to many, many design problems. Really, the only benefit of design reuse is a reduction in the resources and time necessary to accomplish a task. Which is a pretty awesome benefit for fundamentally limited designers. I do not see the benefit of design reuse for an entity that expends zero resources.