Compiler-Handled Optimizations

Recently, our class covered optimizations handled by the compiler.

The emphasis was that these kinds of optimizations are not necessary in higher-level source code, and that one should therefore prioritize readability and intuitive style when coding. I personally had not realized that the compiler handled so many of these cases, and had actually spent quite a while writing them into various programs over the years.

While somewhat disheartening, this is not particularly surprising, as a common idiom in programming is 'Don't try to outsmart the compiler'. Another very common one concerns premature optimization, discussed further here: https://shreevatsa.wordpress.com/2008/05/16/premature-optimization-is-the-root-of-all-evil/

I have heard it said that optimization should not be performed at all until the program is complete in readable, idiomatic code and has been profiled to determine if, where, and how certain areas should be optimized. I agree that readability is probably the most important quality of source code, but I still find this notion potentially problematic.

Much of the programming theory I have seen deals with ingraining 'best practices', even where certain changes may not be strictly necessary. This can be seen in the context of security in the idea of reducing 'attack surface area'.

https://www.sans.edu/cyber-research/security-laboratory/article/did-attack-surface.

Basically, the idea stresses best practices that reduce potential security holes, whether or not those holes actually exist.

To illustrate, one example is encapsulation and access modifiers in languages like Java or C++. To be blunt, if a variable is not referenced anywhere except where it should be, there should not be any security holes, and therefore no strict need for it to be private. In the plain C language, these access keywords do not even exist. I have heard of similar practices and programming patterns for reducing error surface area as well.

I would say it is both acceptable and advisable to take steps, which may or may not be necessary, to reduce attack or error surface area. With that in mind, I wonder whether it is favorable to take similar steps (that may or may not be necessary) to reduce a sort of 'optimization surface area'. While more or less untested, to my knowledge, I still see potential benefit, because the programmer is forced to be more conscious of the hardware when structuring algorithms and programming patterns. Thus, the programmer cannot simply abstract this all away and hope that the compiler will take care of it.

In all likelihood, the compiler will take care of it; however, to my understanding, there are often corner cases where this is not true. Perhaps ingraining practices that are 'closer to the metal', so to speak, would facilitate better design in general. This could be especially pertinent up through the abstraction layers, where the logic may actually make a substantial difference in performance.

With all this in mind, I still think applying all of these optimizations by hand would be a total disaster, producing a mess of code that is difficult to read and maintain. Nonetheless, I wonder whether some of them remain useful as a (most often unnecessary) 'best practice'.

 

 
