Would you agree that, in general, making code more robust would make it less efficient?
Hmm... I'm tempted to answer that it totally depends on who you are in regard to this code.
For the coder, no. Making the code more robust tends to make it more efficient, because at this level efficiency is a matter of speed, easy interfacing, and being bug free; things more easily achievable if you follow the single-responsibility principle.
For your boss, yes, it's less efficient. He would prefer more generic code, which will be put on a "hey, this is code we can reuse" pile and make you work faster next time. For him, efficiency lies in the reduction of delays and costs.
And finally, from the point of view of the project manager, the answer is "both", because for him efficiency concerns both the code itself and the time needed to write it. His preference for one or the other will depend on whether he was promoted to project manager after years of coding, or has a beautiful "certified project manager" diploma.
But more globally, I think that anyone with 20 or more years of experience is too old to theorize about coding. Not because age by itself is a problem, but because the world has changed too much during those two decades.
Take me as an example: I started with the early 80's demo scene. Language? ASM or nothing; you needed the smallest possible size and the fastest possible code. I learned to count the ticks, sometimes spending hours optimizing the code to gain one or two. This was mandatory at a time when CPUs started at 2.5 MHz, and it's ridiculous now that they tend to start at 2.5 GHz and are multi-core. The figure isn't exact, but nowadays, between the multi-core structure and the frequency, the worst computer is at least 2,000 times faster. And the same can be said for the RAM, which was below a MB back then and now starts at 4 GB.
The level of optimization we had to do back then, even for basic software, has totally lost its reason to exist. With, obviously, the exception of critical software or the critical parts of a software; a realtime 3D engine still needs its 3D part heavily optimized, but it's almost as if you could just use a scripting language for everything else. And that's what is sometimes done: an engine like Gamebryo, which wasn't the best one but wasn't the worst either, takes all its instructions from a dedicated interpreted scripting language, and still runs games like Fallout: New Vegas smoothly.
But it's not the only major change. When I started, home computing was in its infancy. The father used the computer to plan his budget, the mother tried to see if she could use it to sort her cooking recipes, and the children used it to play. Nowadays we have in our pocket a small device that lets us do so much more; the mother doesn't bother sorting her cooking recipes anymore, since in less than one minute she has access to ten variations of the same recipe.
Spending at least a full year working on a piece of software was the norm; now, unless you're working on a big office automation system, on an OS, or on a AAA game, if you need more than a few months you'll miss your market.
And, while we obviously aren't stupid and are perfectly aware of those changes, we have more difficulty understanding what they imply, still theorizing as if we were in the 80's/90's. Take the whole data-centered approach (I forgot the name of the paradigm): I both understand it and don't understand it. Yes, nowadays everything is data, but those data are processed by code, so why this need to radically change the center point? There's a reason, I'm sure; I'm just too old, having spent too many years focusing on the code, to be able to see/understand it.
And things like Kay's "make it fast" have less meaning nowadays. It's all relative because it's Python, so not only a scripting language but also one of the slowest, but for my pleasure (and perhaps some bit of nostalgia) I used Python to write a raster class for Ren'py. Then I started to optimize its speed. One day of work for a speed increase between 5% and 10%, which made me gain less than 0.001 second. It's probably possible to gain more, but the gain will still be too ridiculous to be worth the pain.
Well, in fact it's not such a relative example. If that's all you can gain with one of the slowest scripting languages, what is the gain with correct code in C, if you don't go for "dirty" optimizations? A nanosecond? There's code where it still makes a difference, but not that much.
"Make it fast" nowadays mostly imply a change in the language you use, while at first it was more intended as "use the most speed efficient algorithm". But unless you're using a script language
and totally messed your algorithm, nowadays changing the algorithm imply a totally insignificant gain in terms of speed.
And it's the same for Martin's theorizing. I tend to agree with him most of the time, but I know that the juniors at work look at those theories strangely. We come from another age, and have a totally different conception of "computer science" than they do.
Wow, I knew Perl is known for being a bit "dirty", but your code example is next level scary.
I guess that it's not the time to tell you that I have a whole module where the code for the object is written (in the main scope, not its own) at execution time, and partly based on variables, so that it adapts to the current situation.
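To give a (voluntarily simplified and entirely hypothetical) idea of the principle: building a package with a string eval at execution time, its methods depending on a runtime value. None of these names come from my actual module.
Code:
use strict;
use warnings;

# Hypothetical case: the accessor list depends on the platform we run on.
my @fields = $^O eq 'MSWin32' ? qw( drive path ) : qw( path );

# Build the package source as a plain string...
my $code = "package Runtime::Target;\n";
$code .= "sub new { my \$class = shift; return bless { \@_ }, \$class }\n";
for my $field (@fields) {
    $code .= "sub $field { return \$_[0]->{$field} }\n";
}

# ...then let Perl compile it, now, at execution time.
eval $code;
die $@ if $@;

my $obj = Runtime::Target->new( path => '/tmp' );
print $obj->path(), "\n";    # prints "/tmp"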
Yes, Perl is "dirty". To my knowledge it's the only language for which it's own author confess that he absolutely don't know what it can do. Not in the way that he don't know its capabilities, but because he don't know its limits.
I did some AWK programming at school 20 years ago, so maybe I know a little Perl already
In fact, we all know a little bit of Perl. While C is the father of most modern languages, in a way Perl is the big brother, the one that experienced everything before anyone else. Not everything Perl did has been picked up, but modern languages still took what was the most interesting.
I was thinking more along the lines of describing the state of the whole system.
Then yes, Perl testing is really close to tooling, especially with the pretty explicit names of the test functions. And since there's (globally speaking) already a test for anything, you aren't limited to basic states. You'll have a bunch of is() and other ok() tests for the intermediate steps, and end your test suite with an is_valid_json() that will tell you that, in the end, you effectively produced, well, a valid JSON structure.
Which leads you to possibly have a test suite that simulates the whole process of your software; a process that will be validated at each step by validating its actual state. Something that can look like:
Code:
use Test::More;
use Test::Exception;    # lives_ok
use Test::Output;       # stdout_is

$inputBuffer = "some test string";
lives_ok { acquireData() } "test name.";
is( $intermediateBuffer, "some test string", "test name." );
lives_ok { processData() } "test name.";
is( $intermediateBuffer, "now processed string", "test name." );
stdout_is { sendData() } "final result", "test name.";
Which obviously helps you to keep the whole picture in mind. And if you have more than one state to validate, just group the tests:
Code:
test "Data acquisition", sub{
lives_ok( acquireData(), "test name." );
is( $intermediateBuffer, "some test string", "Data received." );
ok( $waitingForData, "Back to waiting state." );
};
If all the tests in the group pass, you'll have only one line in the test summary, telling you that the group passed. Otherwise, you'll know which test failed inside the group.
And the fact that you don't have to deal with the result itself also helps. You don't have to write the "pass"/"fail" output, which means that, in a way, you don't even have to know the success condition. Whether or not you know what a valid JSON is, it's the task of is_valid_json() to know it, and it will tell you if you failed. You can then focus on what matters: "is this correct, whatever 'correct' means".
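To illustrate that last point, a minimal sketch of how such a suite could end; is_valid_json() comes from the Test::JSON module, and produceReport() is a made-up stand-in for the real code:
Code:
use Test::More;
use Test::JSON;    # provides is_valid_json

# Made-up stand-in so the sketch is self-contained.
sub produceReport { return '{"status":"ok","count":3}' }

# Test::JSON knows what "valid JSON" means, so we don't have to.
is_valid_json( produceReport(), "the final output is valid JSON." );

done_testing();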
It's a bit scary that Perl is such a high-level language that it can do all that, but has no guards to prohibit misuse. I kind of get why Python is more popular among coding beginners than Perl.
Personally, I see them as the two opposite limits of coding.
With Perl you're free to do whatever you want, and to do it in whatever way you want. But it will be "dirty", and you should probably never look at what you wrote two years ago; even you will be scared. And with Python you have really little freedom, but every time it will be the cleanest code you've ever written; well, unless you're like me, corrupted by Perl.
Which falls back to the "too old to theorize" part above. They are two radically different approaches that correspond to two different epochs. Not in regard to their creation dates, which aren't that different (87 for Perl, 91 for Python), but in regard to their popularity. Perl was almost instantly popular, while it took almost 20 years for Python to effectively rise.
Perl was popular in the 90's and early 00's, which were a time of pure experimentation. And what's better for experimentation than a language that has nearly no limits, while still being robust and (relatively speaking) fast? It was also the start of the Internet, with mostly plain-text protocols; Perl being initially made for text processing, it was perfect! And since coding, and more globally computer use/administration, was a matter of passion before being a job, the "dirty" side of Perl wasn't really a problem.
It was in fact more of an advantage. When you wanted to know whether "it's possible to do this", you knew that Perl would not be the reason why it's impossible. Of course, you could have used C, it wouldn't have been the reason why it's impossible either, but you would have needed at least 10 times longer. And, while spending one hour to see if a stupid idea that crossed your mind is possible isn't that unreasonable, spending a full day on it is less tempting. Therefore, you tried with Perl, and once you knew that it was possible, you switched to C to effectively do it, and do it correctly this time.
But nowadays, the experiments are over. The Internet is stable and robust, new protocols send and receive binary data, and computers are everywhere. What is needed is a structured, foolproof language, because half (if not more) of the people who'll have to use it aren't following their passion, they chose a job. And their job will potentially impact the lives of millions of people, so it needs to be done in the cleanest possible way. And Python is teaching that, which will show when they move to compiled languages.
Do I understand correctly that the power of Perl is in its ability to treat all data as text and run regular expressions on it? That, combined with the fact that everything is open/global/unrestricted, so you can inspect all parts of the system?
Globally, yes. As I said, it's initially a text-processing language, and since text is just a special kind of binary value, it can process any kind of data. More or less easily depending on what you want to do, but a regex can test non-character bytes, so... Add to that its flexibility, the fact that Unix-like systems almost always have a way to expose information as plain text, while most protocols used in networking are at least partly plain text, and... well, what are the effective limits, except speed?
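A minimal sketch of what I mean by regexes on non-character bytes: checking a file's magic number, here the well-known 8-byte PNG signature (the file name is just a placeholder):
Code:
use strict;
use warnings;

# Read the first bytes in raw mode, so they aren't touched.
open my $fh, '<:raw', 'picture.png' or die "open: $!";
read $fh, my $header, 8;
close $fh;

# The 8-byte PNG signature, matched with an ordinary regex.
if ( $header =~ /\A\x89PNG\r\n\x1a\n/ ) {
    print "Looks like a PNG file.\n";
} else {
    print "Not a PNG file.\n";
}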
The early monitoring systems were in Perl, and even nowadays some still have at least a way to accept Perl extensions. The early [link] were in Perl, even for binary processing. And so on.
I know you are a Python expert too and Perl is falling by the wayside, so would this be possible in Python? It's of a similar high level, but it seems more guarded/closed off.
"Python expert" is quickly said. I know many things, but I'm far from being an expert. And I know so much mostly because, coming from Perl, I don't accept it when I face a "it's not possible to do this"; it totally hurts my conception of what a coding language is.
As for the possibility to use Python for a Perl-like test suite: theoretically yes, but in practice not really. The problem isn't the feasibility; technically it would be easier and probably more robust to do it with Python. I even have a small Test::More port for Ren'py; one that I have to totally rewrite, alas. No, the real problem is that Perl has 3 decades of modules solely dedicated to testing, while Python has, to my knowledge, a single test library that, globally, limits you to is() and ok() assertions. Therefore, to reach the level of simplicity and flexibility you can reach with Perl, you first have to write your own libraries.
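To give an idea of the gap, a small sketch of the kind of specialized assertions Perl gets for free from CPAN (Test::Deep, Test::Exception and Test::Warn here); getUser() and its messages are made up for the example:
Code:
use Test::More;
use Test::Deep;         # structure-aware comparisons
use Test::Exception;    # exception testing
use Test::Warn;         # warning testing

# Made-up function so the sketch is self-contained.
sub getUser {
    my ($id) = @_;
    die "invalid id\n" if $id < 0;
    warn "id 0 is deprecated\n" if $id == 0;
    return { name => "alice", id => $id, last_seen => time() };
}

# Compare a whole structure, ignoring the volatile parts.
cmp_deeply(
    getUser(42),
    superhashof( { name => "alice", id => re(qr/^\d+$/) } ),
    "user record has the expected shape."
);

# Assert that the right exception is thrown...
throws_ok { getUser(-1) } qr/invalid id/, "negative ids are rejected.";

# ...or that a warning is emitted.
warning_like { getUser(0) } qr/deprecated/, "id 0 warns about deprecation.";

done_testing();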
But I don't think the number of test libraries will increase one day. The lack of "deep" test libraries for every language except Perl already shows that tests aren't really used. The Martin note you linked previously regarding TDD also implies the same. And to this now has to be added the "easier to ask forgiveness than permission" approach of Python, which tends to relegate tests to a bygone age: if things go west, there will be an exception, I just need to catch it and react. And if my code doesn't produce the expected result, I'll see it. So why should I bother to test?
If you look at Ren'py, a game engine ported to 5 OSes, it has only 6 series of tests... Less than 40 KB of code, comments included; for comparison, the part processing the audio needs more than 50 KB. And all of them are basic and totally coupled to the specific task they test. Is something working on Windows and not on Linux? The only way to know is to wait for the "hey, it doesn't work" from the users. Version 7.4.2 was released solely because 7.4.1 had a regression.
Nobody tests nowadays, so nobody will write test libraries. And it's a vicious circle, because the reason why nobody tests is mostly the lack of test libraries.
Going back to Ren'py: it has tons of functions dedicated to the display. Relatively speaking, it's easy to test whether the positioning is correct. Take a screenshot, and compare it to a screen that you built yourself, where each element is perfectly positioned. Ren'py can take screenshots itself, so you could have an automated test for this, to ensure that the positions are correct whatever the OS. But you have to write the code that will compare the two images, including a 1-pixel tolerance to account for a possible approximation factor. Therefore, you don't test.
But if this code were already available, you would just have to write something like:
Code:
[position this here, that there, ...]
saveActualScreen( "actual image.png" )
compareImage( "actual image.png", "reference position.png", approximation=1 )
and so you would probably do it. Simply because it would take you less than 10 minutes, against probably more than a whole week if you had to write the code for the compareImage() part and test it seriously to ensure that it doesn't return wrong results.
Well, .Net has integrated [link] which are massive blocks in between your functions. [...]
It's been really too long since I dropped compiled languages, and it's starting to show. I still use C/C++ at work, but mostly as a secondary coder; I'm more of a project manager and in charge of the tests, so I haven't really tried to update my knowledge.
Again, a very powerful thing for Perl, but you have to have the knowledge to use it and the wisdom not to overuse it.
Well, it's Perl, the "knowledge versus wisdom" is implicit
Edit: Oops, I had closed a quote block that was in fact a code one.