One danger of taking a break on a project

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
Also the willingness to optimize for size and speed is a sign of perfectionism and that may be good but can lead to trouble.
As Alan Kay said, "make it work, make it correct, make it fast, make it cheap".

Unless you have decades of experience and can do the optimization "by instinct", because you've written this kind of code thousands of times and know exactly what will work or not, premature optimization is (almost) only a way to lose time. You'll struggle to make the code actually work all the time (including the possible exceptions that should never happen, but that you'll always encounter at the worst moment), while struggling to optimize it at the same time.
Meanwhile, code that already works all the time (so it works and is correct) is easier to optimize: firstly because you already know where the bottlenecks are (fast) and which parts are awkward to use (cheap), and secondly because you already have the test suite to validate that it still works all the time.
 

QQP_Purple

Well-Known Member
Dec 11, 2020
As Alan Kay said, "make it work, make it correct, make it fast, make it cheap".

Unless you have decades of experience and can do the optimization "by instinct", because you've written this kind of code thousands of times and know exactly what will work or not, premature optimization is (almost) only a way to lose time. You'll struggle to make the code actually work all the time (including the possible exceptions that should never happen, but that you'll always encounter at the worst moment), while struggling to optimize it at the same time.
Meanwhile, code that already works all the time (so it works and is correct) is easier to optimize: firstly because you already know where the bottlenecks are (fast) and which parts are awkward to use (cheap), and secondly because you already have the test suite to validate that it still works all the time.
Not just a way to lose time, but a way to make your code less optimized. Trust me. I've been programming since I was in elementary school. Here are some rules I picked up in the decades since:

1. Things that should never happen WILL. OFTEN.

2. There are two types of programmers: good ones and clever ones.
The cleverer you think you're being, the less clever you really are.
If your solution is uninspired and simple, you are on the right track.

3. Unless you are using assembly language, your optimizations won't be as good as what the compiler can do. Yes, you should try to avoid obvious antipatterns and bad practices. And yes, you should not be stupid, like using Excel as a database. But at the end of the day you are optimizing for human readability and design quality at a high level. Let the compiler, a tool designed by people for the specific job of turning your pretty text into an optimized program at the low level, do its job.

4. Frameworks, compilers, editors and other tools are created by people smarter than you or me for the purpose of making life easy for us. They work. Trust them. Learn them. Use them. Love them.

5. Never do something yourself if someone has done it before you and done it better.

6. Everything has been done before and done better.

7. Read again, in order: 6 => 2 => 4 => 5.
 
  • Like
Reactions: Luderos

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
Not just a way to lose time, but a way to make your code less optimized. Trust me. I've been programming since I was in elementary school.
Don't take it badly, but when I read that kind of introduction, I tend to think, perhaps wrongly, that, having been coding for nearly 35 years now, I've spent more time coding than you've spent living.
Just say what you have to say; there's no need to assert the reason why you're saying it. We're all anonymous here: nobody knows if you really have that kind of experience, just like nobody knows if I really have 35 years of experience. It's not on your resume that people will judge you; or if they do, it will be to question the reasons that made you bring it up.


3. Unless you are using assembly language, your optimizations won't be as good as what the compiler can do.
Optimization isn't just a question of speed, and not all languages are compiled. Alan Kay ends with "make it cheap", which he could have phrased as "make it easy to use". Depending on the software you're working on, the paradigm(s) you have to/decide to follow, or, like Marzepain said, the level at which the code is designed, optimization takes on a different meaning.
It's only at the language level that it's effectively about speed. With each step you take down the levels, speed becomes less and less important.
Therefore, at the library level, which is just one step down, the cheapness of the library's interface starts to matter and takes some importance away from speed optimization. There's no gain to expect from a library that runs at the speed of light but needs a full month of study just to understand how to use it. That kind of optimization is limited to critical systems, and if you work on critical systems, you generally use code that you wrote yourself, or that was written internally; code that can simply be a rewrite of the said library. Firstly because you need to be sure that it does exactly, and at 100%, what you expect it to do. Secondly because a library is always more generic than your needs, so it will always be a little slower than what you can do yourself by focusing on your sole needs.

To take only one example, and make it duck typing since the most used language in this scene is Python, optimization can also mean that your classes are duck-type compliant. This implies that, while the code of the classes themselves can possibly be more complex or less optimized, all the code that has to use them will be way simpler. It doesn't have to account for differences and exceptions. It doesn't have to be split between the different possibilities implied by the different classes it has to deal with.
Since that's the part you'll spend the most time working on, you're optimizing the development of the software you're working on. And since, while it's not the core of your code, it's still the biggest part of it, you're also reducing the possibility of bugs: if your code works for one class, and your classes are effectively working and correct, then your code will work for all the classes. You also remove all the "oh, yeah, by the way, what does this method of this class expect/return? I don't remember right now", since all classes expect the same information, in the same order and of the same type, while returning either nothing or the same type, in the same order.
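To give a minimal sketch of what that duck-typed contract can look like in Python (the exporter classes here are made up purely for illustration):
Code:
    class PngExporter:
        def export(self, scene, path):
            """Write scene to path as PNG; returns nothing."""
            ...

    class WebpExporter:
        def export(self, scene, path):
            """Write scene to path as WebP; returns nothing."""
            ...

    def save_all(scenes, exporter, folder):
        # The calling code never checks which exporter it got: any object with
        # the same export(scene, path) signature works, with no special cases.
        for i, scene in enumerate(scenes):
            exporter.export(scene, f"{folder}/scene_{i}")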
 
  • Like
Reactions: Marzepain

QQP_Purple

Well-Known Member
Dec 11, 2020
Don't take it badly, but when I read that kind of introduction, I tend to think, perhaps wrongly, that, having been coding for nearly 35 years now, I've spent more time coding than you've spent living.
Just say what you have to say; there's no need to assert the reason why you're saying it. We're all anonymous here: nobody knows if you really have that kind of experience, just like nobody knows if I really have 35 years of experience. It's not on your resume that people will judge you; or if they do, it will be to question the reasons that made you bring it up.
You are not too far off, actually. But compared to most people here, we are veterans. Which is exactly the point of asserting our experience.

This forum is full of people who are still young to the trade, on account of the primary game-making and modding technologies, such as Unity, being young. And there is nothing bad about that. Indeed, that is a wonderful thing. But it is also an opportunity, if not a duty, for us veterans to try and pass on some of our experience to them. If we can help them avoid the mistakes of our past, then we should.

And the point of us asserting experience in that context is to hopefully get them to sit down and listen.

As for the rest, I don't disagree with you. I didn't disagree with you previously either.
I was merely making an addendum to your post in the form of a "clever" (and yes, I am aware of the irony of proving my own point by my own example) statement aimed at the newer programmers in the audience.

And in my experience they tend to treat "optimization" as a buzzword for speed rather than realizing its many different facets. Such as: what are you optimizing for? Speed? Usability? Maintainability? Extensibility? Reusability? Churning it out before the deadline so you don't have to do overtime?

But we could go on like this forever. :)
 

Diconica

Well-Known Member
Apr 25, 2020
3. Unless you are using assembly language, your optimizations won't be as good as what the compiler can do. Yes, you should try to avoid obvious antipatterns and bad practices. And yes, you should not be stupid, like using Excel as a database. But at the end of the day you are optimizing for human readability and design quality at a high level. Let the compiler, a tool designed by people for the specific job of turning your pretty text into an optimized program at the low level, do its job.
I'm going to disagree in part with this.

The compiler is limited to the code you give it. It can only optimize the code it is given; it doesn't understand the final purpose or reason for writing the code. So unless you write the code as the most efficient procedure you can, it can't possibly reach the best optimization.

Also, the final size of your program and its working data can have a massive impact. If you can make the program and most of its data fit in the processor cache, it will run vastly faster than if it has to read and write back and forth to RAM, or worse yet to the drive, be it because of paging or simple file access.

Example: let's say you are generating a map. It can be vastly faster to break the map up into sections that fit inside the processor cache than to generate the entire map at once. Another example: let's say you want to generate 100,000 rooms in a dungeon, check them for collisions, and then connect them with halls. Doing all 100,000 at once, versus breaking the work up into sections, can take vastly longer. The reason is that each room would need to be checked against all the other rooms, which is a quadratic increase in workload. Pathing also becomes more complex. However, breaking it into sections limits the complexity: after the sections are complete, you can simply place them near one another and then run connections between those.
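A rough Python sketch of that sectioned approach (the sizes and counts are made up); the point is that each collision check only looks at rooms in the same section instead of all 100,000:
Code:
    import random

    SECTION = 256          # each section is a SECTION x SECTION tile grid
    ROOM = 10              # rooms are ROOM x ROOM tiles
    ROOMS_PER_SECTION = 100

    def overlaps(a, b):
        ax, ay = a
        bx, by = b
        return abs(ax - bx) < ROOM and abs(ay - by) < ROOM

    def generate_section():
        # Collisions are only checked against rooms already placed in this section.
        rooms = []
        while len(rooms) < ROOMS_PER_SECTION:
            candidate = (random.randrange(SECTION - ROOM), random.randrange(SECTION - ROOM))
            if not any(overlaps(candidate, r) for r in rooms):
                rooms.append(candidate)
        return rooms

    # 1,000 sections x 100 rooms = 100,000 rooms, but each collision check
    # compares against at most 99 rooms instead of 99,999.
    dungeon = [generate_section() for _ in range(1000)]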

Then you have multi-threading. There are a lot of ways people implement it; most aren't as efficient as they could be.
This is a great video where Sean Parent shows the difference in performance between various approaches.


Coding approaches can also play a role.
Take OOP (object-oriented programming) vs ECS (Entity Component System):
both have great features, but ECS can provide better performance and lends itself much better to optimizing multi-threaded applications.
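To sketch the difference in layout (Python is used here only for readability; the real performance gap shows up in lower-level languages or with packed arrays, where the flat component arrays stay cache-friendly and are easy to split across threads):
Code:
    # OOP style: each entity owns its data and its behaviour.
    class Enemy:
        def __init__(self, x, y):
            self.x, self.y, self.hp = x, y, 100

        def update(self, dt):
            self.x += 1.0 * dt

    enemies = [Enemy(float(i), 0.0) for i in range(10_000)]
    for e in enemies:
        e.update(0.016)

    # ECS-ish style: components live in flat, homogeneous arrays and a "system"
    # walks them in one tight loop, which is trivial to chunk across threads.
    positions_x = [float(i) for i in range(10_000)]

    def movement_system(dt):
        for i in range(len(positions_x)):
            positions_x[i] += 1.0 * dt

    movement_system(0.016)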

Then there are simple optimizations, such as passing by const reference vs passing a copy. Can compilers do that? Sure, we can write a compiler to do it, but the person going through and putting those words in saves the time the compiler would need to analyze the code and determine whether it is safe to make that change.
You might not think that's much, but every time a variable is passed as a copy it has to allocate space and transfer data. If you are doing something millions of times, that's costly. If you have a bunch of pass-by-values in your program, it's going to run like ass. You also have stuff like inline. Inline can be a double-edged sword: it can improve performance by eliminating a function call, and some compilers will do it automatically. However, inline the wrong function and you can increase the size of a program drastically. If the function is used in just one location in your code, then inlining isn't bad, and most modern compilers might do it on their own. But let's say that same function is used in 1,000 places in your code. Well, then your code is going to grow by the size of that function 1,000 times. Not good if you want to keep the program small enough to be contained in the processor cache.

The point is that compilers aren't at the point where they get rid of bloated code: they don't change the way you access and use data, they don't reorder or change operations to deal with issues like quadratic workloads, they can't restructure your code to be better suited to multi-threading, and they don't change the way multi-threading is implemented. Even simple optimizations can make a large difference.

To give you an idea of how much of a difference this can make: one project of mine was taking over 30 minutes with compiler optimization set to -O3. When I finished reworking it and rebuilt, it took 0.83 seconds.
In short, you have a huge amount of room to make a difference. If you feed crap code into a compiler, you will only get a crap version back out of it.

As for readability:
Which is easier to read, a project that has 100 lines of code or a bloated one that has 2,000 lines?
You might think that's a stretch or a joke; it isn't in the least. My son's school needed a site built that had a calendar, a gallery, a news system and more. So they got to looking at all the various options they could buy or find open source.
I built the site for them instead. It was easier to maintain and to expand or add features to, and it performed better.
The code was better organized. It's a lot easier to search through something that is 1/20th the size.
 

Diconica

Well-Known Member
Apr 25, 2020
I've seen this sentiment among programmers many times, and the positive side of it is the willingness to learn their trade. Many newbie programmers hack something together and, when stuck, find the shortest piece of code on StackOverflow.
Also the willingness to optimize for size and speed is a sign of perfectionism and that may be good but can lead to trouble.

I have 2 points to make:
1 Optimization for results in a business sense
The question is "What are you trying to achieve?"

2 Optimization for reuse in a design and ergonomic sense
The question is "Who is going to use your (code) product and how are they going to use it?"

The first makes tradeoffs possible, like sacrificing speed or memory use for earlier delivery. Also, having components or subsystems is inherently bad for performance, because it leads to local optimization of those components and subsystems while the system as a whole may suffer.
As a side note, Elon Musk started out as a programmer who was doing things in ASM and C and hating OO, but if he were still doing that he would not have a chance of reaching Mars.

The second is very hard to understand for somebody who is just trying to make it work. Your .Net remark really stung, because you probably don't know the situation of C/C++ before that (or Java, for that matter). I still have the CD of the short-lived .Net 1.0 version at home. The design of .Net standardized many language improvements that had been floating around academia for decades. For instance, using a memory manager for C++ was possible for years before .Net, but it was little used and cumbersome.

I have to admit the whole thing took a turn when the managers got hold of it, and it became even worse when it became a thing for the marketing people. Luckily, the managers and marketing people are infatuated with the cloud these days. The .Net Core versions are a lot better for it.

Designing code can be done on many levels. Consider this hierarchy:
  1. Language designer
  2. Library designer
  3. Framework designer
  4. Solution architect
  5. IT Pro/Maintainer
  6. Enterprise architect
On all those levels there are tradeoffs to make, and they impact the people who work with the code. Many programmers are actually solution architects, but they are either throwing tools, scripts, workflows or even people at the problem like an enterprise architect, or they are building frameworks, libraries or even their own language to solve the problem.
Judging the performance of anything involves defining the situation. If you want to judge a programmer you really need to define the situation. The same goes for a solution that is created for a problem situation.
Actually, I've been using ASM, C, C++, BASIC and Pascal since 1983. Since then I've picked up Python, PHP, JS, Java, C# (stopped using it) and a number of other languages.

Optimizing for results in a business sense: you mean like reducing the amount of work your processor has to do so that the company saves money on both server load and power? Yeah, see, when you truly optimize your code for performance it affects a hell of a lot more. Don't believe me? Ask Google, Amazon, Facebook... and others.

Rushing to deliver a product that is subpar can result in a customer going to another vendor. Also not smart for business.

Generally, when you optimize for performance you get a couple of benefits with it: a smaller amount of code, thus less crap to read through and easier maintenance. Which also tends to make it easier to reuse.
Optimized code usually compiles faster. Secondly, if you already know it is optimized, you aren't going to spend time later going back to it trying to squeeze out performance. The less code you have, the fewer bugs you tend to have, and since it is smaller, it is easier to find and fix them.

I've worked for various businesses and agencies and on different projects: embedded systems, OSes, compilers, encryption, graphics, database systems... and more. I have never once found a reason to compromise the way you are making out. A couple of times management insisted on it, only to regret it later and spend more time going back and doing it the way I said it should have been done to start with. Hell, one of those managers the company went after for damages.

Basically, when I hear arguments like yours and the so-called benefits people tout, I tend to think they are false benefits, because for the most part people would get the same thing if they did it right and optimized to start with, plus they would get more out of it.
 

Marzepain

Newbie
May 4, 2019
As Alan Kay said, "make it work, make it correct, make it fast, make it cheap".
Thanks for the quote. I got "make it work, make it correct, make it fast" from a keynote by Robert C. Martin. I didn't know it was from Kay, but it seems likely, as Martin reuses things if they are good.

Unless you have decades of experience and can do the optimization "by instinct", because you've written this kind of code thousands of times and know exactly what will work or not, premature optimization is (almost) only a way to lose time. You'll struggle to make the code actually work all the time (including the possible exceptions that should never happen, but that you'll always encounter at the worst moment), while struggling to optimize it at the same time.
Meanwhile, code that already works all the time (so it works and is correct) is easier to optimize: firstly because you already know where the bottlenecks are (fast) and which parts are awkward to use (cheap), and secondly because you already have the test suite to validate that it still works all the time.
Completely agree.
I see the "make it cheap" part as packaging the algorithms you made into a service or library everybody can use, thus making the functionality cheap for everybody to use.
Having the test suite does mandate that the tests are not tightly coupled. I call that more BDD than TDD, but I just read an article from Martin stating that having tightly coupled tests is a sign of inexperience and that TDD is not an excuse to forget about designing code.
 
  • Like
Reactions: anne O'nymous

Marzepain

Newbie
May 4, 2019
Not just a way to lose time, but a way to make your code less optimized. Trust me. I've been programming since I was in elementary school. Here are some rules I picked up in the decades since:

1. Things that should never happen WILL. OFTEN.

2. There are two types of programmers: good ones and clever ones.
The cleverer you think you're being, the less clever you really are.
If your solution is uninspired and simple, you are on the right track.

3. Unless you are using assembly language, your optimizations won't be as good as what the compiler can do. Yes, you should try to avoid obvious antipatterns and bad practices. And yes, you should not be stupid, like using Excel as a database. But at the end of the day you are optimizing for human readability and design quality at a high level. Let the compiler, a tool designed by people for the specific job of turning your pretty text into an optimized program at the low level, do its job.

4. Frameworks, compilers, editors and other tools are created by people smarter than you or me for the purpose of making life easy for us. They work. Trust them. Learn them. Use them. Love them.

5. Never do something yourself if someone has done it before you and done it better.

6. Everything has been done before and done better.

7. Read again, in order: 6 => 2 => 4 => 5.
I tend to agree on most of the points. It's a bit cynical, but most of the time it's true.
I think points 5 and 6 take it too far on occasion. There are times when you just have to bite the bullet and go for it, after having done at least some (market) research first. This often happens when searching for a detail of a detail of a detail and thereby getting off the beaten path. Unfortunately, details can be important and a programmer can get stuck on them.
Also, businesses are often trying to do something innovative, trying to get ahead of the competition instead of doing something tried and true. That doesn't have to mean that the building blocks are all new, but many times some new blocks are required.
 

Marzepain

Newbie
May 4, 2019
Actually, I've been using ASM, C, C++, BASIC and Pascal since 1983. Since then I've picked up Python, PHP, JS, Java, C# (stopped using it) and a number of other languages.

Optimizing for results in a business sense: you mean like reducing the amount of work your processor has to do so that the company saves money on both server load and power? Yeah, see, when you truly optimize your code for performance it affects a hell of a lot more. Don't believe me? Ask Google, Amazon, Facebook... and others.

Rushing to deliver a product that is subpar can result in a customer going to another vendor. Also not smart for business.

Generally, when you optimize for performance you get a couple of benefits with it: a smaller amount of code, thus less crap to read through and easier maintenance. Which also tends to make it easier to reuse.
Optimized code usually compiles faster. Secondly, if you already know it is optimized, you aren't going to spend time later going back to it trying to squeeze out performance. The less code you have, the fewer bugs you tend to have, and since it is smaller, it is easier to find and fix them.

I've worked for various businesses and agencies and on different projects: embedded systems, OSes, compilers, encryption, graphics, database systems... and more. I have never once found a reason to compromise the way you are making out. A couple of times management insisted on it, only to regret it later and spend more time going back and doing it the way I said it should have been done to start with. Hell, one of those managers the company went after for damages.

Basically, when I hear arguments like yours and the so-called benefits people tout, I tend to think they are false benefits, because for the most part people would get the same thing if they did it right and optimized to start with, plus they would get more out of it.
We might have different definitions of performance.
It seems your definition of performance is coupled to elegance, championed by leaving out anything unnecessary and thus being left with the most optimal code. I would see that as the "make it correct" stage.
In the "make it fast" stage, I would do whatever it takes to make it fast, and that is often not nice code: adding hacks, altering the architecture in uncommon ways, dropping into ASM, etc.
Every application can be made faster by hand-optimizing the ASM that comes out of the compiler. Given enough time and effort it's always possible to outsmart the compiler, because you start with the best of what the compiler can give you and you add to that.

As for your opening remark about commenting code and making documentation: in the "make it work" stage understanding is low, and comments are often for yourself and mediocre. Things should not be left in this state, but high pressure or lack of motivation makes programmers do it.
In the "make it correct" stage your understanding is high and it is reflected in the code and architecture, thus having little need for comments and documentation. Maybe the most extreme case is 's, where the code is readable as natural language. It would be great if programs were left in this stage.
In the "make it fast" stage your understanding of what is needed and what the machine is doing is at its maximum. The code is optimized for the machine, making it necessary to add comments and documentation for humans. There may be a need for this stage, but often it is too costly.
In the "make it cheap" stage the functionality would be packaged in a service or library for others to use. This would make it "cheap" for those others to get the same results as you did. It does, however, take the maximum effort in designing the code and architecture so as to make it intuitive for users, coupled with good documentation; otherwise others would not be able to do much with it. This stage is almost never a viable option, but there are gains to be had from releasing a web API or library to the public.

I do get the feeling that you like an imperative/procedural style of programming. I just read this article and this one, which reduce OO to managing function pointers and FP to managing assignment.
Do you think those are artificial limits for programmers, in that they will stand in the way of performance improvements?
The machine knows nothing of code-management concepts like classes or functions, only instructions.
Somebody like Jim Keller (minute 28:46) must have it all wrong when talking about abstraction layers.
 
  • Like
Reactions: anne O'nymous

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
Having the test suite does mandate that the tests are not tightly coupled. I call that more BDD than TDD, but I just read an article from Martin stating that having tightly coupled tests is a sign of inexperience and that TDD is not an excuse to forget about designing code.
I tend to disagree with him on the tightly coupled part, but it's probably mostly because I come from a world where testing is, relatively speaking of course, a piece of cake (Perl), and I've imported that approach everywhere I can. Tests are still as boring to write, but they're fast and easy to write, thanks to the tons of modules solely dedicated to them. And with the Test Harness structure, they are really easy to understand.
The main difference it makes is that the whole picture isn't lost when you write your tests, because you don't lose time on the plumbing of the test itself. Whereas with other languages, even if with time you start to build your own library for the basic tests, you always design them with the situation that needs them right now in mind; designing a more generic test would be a waste of time at that moment, and writing tests is already enough of a pain in the ass without that.

Or perhaps I don't disagree that much; it all depends on the coupling level you put behind "tightly coupled". I mean, if you're using, I don't know, Test::Exception's dies_ok, your test will just validate that your code effectively dies, and after that you'll validate that it logs correctly, or whatever you did to provide the information regarding this death; which, once again, is made easier since it's Perl: you can focus on the information, whatever its structure. Write a single line, and you have your test.
Whereas, if you have to write your own code to test whether this death happens, you'll tend to focus on the way your code died. Then, effectively, if you change the code, your test will fail; not because you made an error in the code, but because your test is now obsolete. Which leads to what Martin said: you'll always change or refactor your code so it doesn't fail the test, and more globally design it for this purpose.
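To put the same idea in Python terms, since that's what most people here use: the low-coupling version of such a test asserts only the fact of the failure, not its exact wording or call path (the withdraw function below is made up purely for illustration).
Code:
    import pytest

    def withdraw(balance, amount):
        # Hypothetical function, only here to give the test something to check.
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    def test_withdraw_dies_on_overdraft():
        # Coupled only to the fact that it raises, not to the message or the implementation.
        with pytest.raises(ValueError):
            withdraw(10, 50)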

I'm far from being as good at that as he is, but the senior who introduced me to Perl testing when I started working insisted on the fact that, when you write your test suite with a TDD approach, you aren't preparing the validation of your code, you're writing the algorithm of your software. Which is both "tightly coupled", since your test suite is totally dependent on this algorithm, and also not really coupled, because it's totally independent of the way you'll implement this algorithm, and so of the code you'll write.
To keep my dies_ok example: whatever the when and how of making your code die, what is effectively tested is that it dies. You can do the validation before everything else, or right before it's needed; it doesn't matter. You can decide to write a generic validation function or use a self-validating object; as long as the death is correctly propagated, your test will still pass as expected.
And it's probably here that the fact that I mostly use Perl for the test suites, even when they have to apply to compiled code, plays a role. The fact that each test has to be explicitly named really helps you keep the whole picture in mind. Well, to be honest, it should, since I tend to have too many "functionName - also accepts float"-like names, where they should be more on the "functionName - accepts floats coming from otherFunctionName" side.
But the important part is that, because you have to name your test, you have to remember why this test exists. And the answer isn't "because I want to write the code like that", but "because the code needs to work like that". This "need" has a reason, one that comes from the rest of the program.


In the end, to fall back to the original topic: even if you do it after writing your code, having a test suite is a way to avoid part of the consequences of too long a break away from your project. At least for the code part of it.
If your test suite is not too badly written, it should remind you of the reason behind "this". And if you include regression tests, it matters less if you don't remember that "this has a really good reason to be weird"; the test will tell you that you shouldn't have made it less weird.
 
  • Like
Reactions: Marzepain

Marzepain

Newbie
May 4, 2019
I tend to disagree with him on the tightly coupled part, but it's probably mostly because I come from a world where testing is, relatively speaking of course, a piece of cake (Perl), and I've imported that approach everywhere I can. Tests are still as boring to write, but they're fast and easy to write, thanks to the tons of modules solely dedicated to them. And with the Test Harness structure, they are really easy to understand.
The main difference it makes is that the whole picture isn't lost when you write your tests, because you don't lose time on the plumbing of the test itself. Whereas with other languages, even if with time you start to build your own library for the basic tests, you always design them with the situation that needs them right now in mind; designing a more generic test would be a waste of time at that moment, and writing tests is already enough of a pain in the ass without that.

Or perhaps I don't disagree that much; it all depends on the coupling level you put behind "tightly coupled". I mean, if you're using, I don't know, Test::Exception's dies_ok, your test will just validate that your code effectively dies, and after that you'll validate that it logs correctly, or whatever you did to provide the information regarding this death; which, once again, is made easier since it's Perl: you can focus on the information, whatever its structure. Write a single line, and you have your test.
Whereas, if you have to write your own code to test whether this death happens, you'll tend to focus on the way your code died. Then, effectively, if you change the code, your test will fail; not because you made an error in the code, but because your test is now obsolete. Which leads to what Martin said: you'll always change or refactor your code so it doesn't fail the test, and more globally design it for this purpose.

I'm far from being as good at that as he is, but the senior who introduced me to Perl testing when I started working insisted on the fact that, when you write your test suite with a TDD approach, you aren't preparing the validation of your code, you're writing the algorithm of your software. Which is both "tightly coupled", since your test suite is totally dependent on this algorithm, and also not really coupled, because it's totally independent of the way you'll implement this algorithm, and so of the code you'll write.
To keep my dies_ok example: whatever the when and how of making your code die, what is effectively tested is that it dies. You can do the validation before everything else, or right before it's needed; it doesn't matter. You can decide to write a generic validation function or use a self-validating object; as long as the death is correctly propagated, your test will still pass as expected.
And it's probably here that the fact that I mostly use Perl for the test suites, even when they have to apply to compiled code, plays a role. The fact that each test has to be explicitly named really helps you keep the whole picture in mind. Well, to be honest, it should, since I tend to have too many "functionName - also accepts float"-like names, where they should be more on the "functionName - accepts floats coming from otherFunctionName" side.
But the important part is that, because you have to name your test, you have to remember why this test exists. And the answer isn't "because I want to write the code like that", but "because the code needs to work like that". This "need" has a reason, one that comes from the rest of the program.


In the end, to fall back to the original topic: even if you do it after writing your code, having a test suite is a way to avoid part of the consequences of too long a break away from your project. At least for the code part of it.
If your test suite is not too badly written, it should remind you of the reason behind "this". And if you include regression tests, it matters less if you don't remember that "this has a really good reason to be weird"; the test will tell you that you shouldn't have made it less weird.
That's an interesting reply. I did not consider that tooling would be an option here. I have used , but I consider that more eye candy than what you explain. It does remind me a bit of the Page Object Model (POM), but in your case there are functions instead of objects in between the test and the production code. The POM handles frequently occurring UI changes by throwing an abstraction layer at them: when the UI changes, you only have to change one place instead of many tests. You have a more generic solution. It's also an abstraction, but the fact that it's applicable to all situations is quite impressive. It's an eye-opener.

I mentioned Behavior Driven Development (BDD) because it goes really high level, up to the domain model and the actual feature requests. The thing is that writing in a tool like Cucumber, in a "logical language that customers can understand", is a lot of the time too much for a customer and gets me writing something other than code. In theory I'm all for it. In practice... the customer is generally not interested or knowledgeable enough, meaning he only knows whether it's right or wrong when he sees it, not before. It is, however, better to have an analyst write Cucumber language than a thick report I have to thumb through.
The problem is that BDD is too high level, leaning towards integration testing, as it tests whole functionality implemented by multiple units. I was looking for something a bit above the units, but below the functionality. Your functions and the POM give me some ideas of what to look for in testing methods or tools.

As for the topic of documentation, I think tests are a great documentation tool. The test function names do get long, though; sometimes unreadably long. I try to combat this by grouping them in classes and tagging them with Category attributes, but that may be a .Net / Visual Studio Test Viewer-only solution. ReSharper has another test viewer in VS that is lacking on that front, and outside .Net and VS I can only guess, but other languages and environments probably have similar solutions.
 
  • Like
Reactions: anne O'nymous

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
Note: I'm reacting to two of your comments.

It seems your definition of performance is coupled to elegance, championed by leaving out anything unnecessary and thus being left with the most optimal code. I would see that as the "make it correct" stage.
Yet is Dijkstra's elegance still always the correct way nowadays?

Take modern OOP, for example, where classes aren't seen anymore as components of one project, but more as small libraries waiting to be reused. It changes the definition of "unnecessary", or perhaps broadens the definition of "necessity".

Imagine a class that is designed to process buffers that will be at most 255 bytes, because the class will never receive longer data. It can be, for example, because the data comes from an external source that can only send up to 255 bytes. Of course, the data can still be longer, but if that happens it means there's a transmission problem, and therefore it's the acquisition code that has to validate the size, whether to drop the data or to ask for it to be transmitted again.
Then one day you have to work on a second program that has to do the same processing, and so can perfectly well use this class. Except that the acquisition process is totally different and has absolutely no reason to care about the size of the data. Perhaps simply because the class isn't the only one that can possibly process the data: "thisTag:data" goes to the class that expects 255-byte buffers, while "thatTag:data" goes to another class.
You're then left facing solutions that are all "unnecessary" ones, while still all being valid solutions (a rough sketch in Python follows the list):
  • It's not necessary to make your class deal with a buffer bigger than 255 bytes; there will be none in the project you initially wrote the class for, and at that time there's absolutely no guarantee that this class will ever be used again.
  • It's not necessary to create a new class, even by inheritance, just to rewrite the code so that it works with buffers of variable size; it should have been done right from the start.
  • It's not necessary to make the acquisition process of your second project validate the size when it knows the data will be processed by "this class", because it's supposed to be the job of the class to reject data that it can't process.
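To make the dilemma concrete, here is a rough Python sketch (the class names, the tags and the routing are just illustrative, not anyone's real code):
Code:
    class Max255Processor:
        """Original class: written for a source that can never send more than 255 bytes,
        so the size check was left to the acquisition code of the first project."""
        def process(self, data: bytes):
            ...  # the actual processing, silently assuming len(data) <= 255

    class OtherProcessor:
        """Another processor used by the second project; size doesn't matter to it."""
        def process(self, data: bytes):
            ...

    # Second project: the acquisition code only routes by tag and has no reason
    # to care about sizes -- so who enforces the 255-byte assumption now?
    processors = {"thisTag": Max255Processor(), "thatTag": OtherProcessor()}

    def on_message(tag, data):
        processors[tag].process(data)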


In the "make it fast" stage I would do whatever it takes to make it fast and that is often not nice code. Adding hacks, altering the architecture in uncommon ways, dropping into ASM, etc.
Totally agree, speed optimization is never nice code, never.
One of the common speed optimization done in Perl is to forget that scopes exist. To resume, parameters are past through an array where each entry point to the original variable. You're supposed to assign the values to a local variable, proceed them, then return the result. But since you've already a pointer to the original variable, why lost time creating a local variable, returning a value, then assign this returned value, when you can just say "fuck the notion of scope", and inject the value directly in a variable of the calling scope ?
And then you see codes like trim{ $_[0] =~ s/^\s+|\s+$//g; }, that you use by just writing trim( $a );. No need for an assignation, the value of $a is now a trimmed version of what it was before this line. It's totally dirty, and in the same time Dijkstra's elegant ; you've literally no unnecessary part, being even ride of some of the necessary ones.

If your code hasn't turned into some kind of abomination, then you haven't really optimized its speed; it's perhaps faster, but it can still be made even faster. Which tends to imply that you haven't made it fast, but rather made it finally correct.


--8<----8<-- Your second comment --8<----8<--


I did not consider that tooling would be an option here.
Is this really tooling? It's a totally legit question, because I'm not sure anyone really has the answer to it.

Of course, you use already-written functions, and therefore tools, to write your test suite, but if you take a step back and look at what Perl tests look like, you get more the impression of looking at a dedicated sub-language (especially if the module can use the modern syntax, which isn't shown here):
Code:
    is( $whatever % 2, 0, "Even value" );
    ok( whatever(), "Linkable in case of success" );
    like( whatever(), qr/^.*Mr $firstname $lastname.*/, "Format name correctly" );
You're really and strictly validating assertions. This is in opposition to other languages, where you have to build the assertion at the same time as you validate it and then, depending on what you're using, also handle how to react to the result of this assertion. And you're doing it explicitly; I think you'll have no real difficulty understanding what those tests stand for.
Those examples are more for unit tests than anything else, but the principle stays the same whatever kind of tests you write.


Which leads to the other point:
[...] but in your case there are functions instead of objects in between the test and the production code.
Perl is an old language; you can do OOP with it, but by itself the language has absolutely no idea what OO means. OO is more a hack that has been added to the language than a real object implementation. This has a major implication.
When you use a module that is supposed to be an object, you can use its functions as methods ($obj->method()) or as functions (method( $obj )). And if the module has been designed for it, you can also use them as pure functions (method()) that take their $self value from a variable global to the module scope, or that don't need it at all; it depends on what they are doing. It's not that exceptional to find objects with some utility methods that are part of the object but can also be used as totally independent functions.
Basically speaking, changing a procedural module into an object is really easy. Write a new function in place of the import procedure, put a $self = shift at the top of all your functions, and you have an object. It will rely on values that are in the module scope instead of being part of the object, but it will behave like an object. A careful "replace" to change things like $variableName into $self->{variable}, and the object now handles the data internally.
It's obviously not wise to do it this way; a real refactoring would be better. But it shows relatively clearly that there's, from the point of view of the interpreter, no real difference between the FP design of a module and an OO one.
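For what it's worth, the same equivalence can be shown in Python, which most people here use: a method is just a function whose first parameter is the instance (the Greeter class is only an illustration).
Code:
    class Greeter:
        def __init__(self, name):
            self.name = name

        def greet(self):
            return f"Hello, {self.name}"

    g = Greeter("world")
    print(g.greet())           # the OO spelling
    print(Greeter.greet(g))    # the exact same call, written as a plain function taking "self"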
As a side note, that's what made me laugh when reading your OO vs FP and FP vs OO links, because the difference has absolutely no meaning, and at the same time every possible meaning, in Perl. Objects are just a hash structure where entries are either a value or code, which means that they are pure data. But at the same time, they are effective objects (including in the interface or abstraction sense), and, as I said, you can also perfectly use them in a pure FP way.

This being said, all of the test modules rely on the same object (Test::Builder) and are designed as functions more to facilitate the processing than anything else. Basically speaking, you write the processing of the test, then call Test::Builder's ok method with the boolean result of the test and its name, and the object will take care of adding the result to the test summary. And it doesn't matter that your tests come from different modules; they'll all be correctly numbered, since they are all linked to the same main test object.

Which leads to the fact that test modules in Perl are neither strictly objects, functions, tools, nor interfaces, and at the same time are all of these.


The problem is that BDD is too high level, leaning towards integration testing, as it tests whole functionality implemented by multiple units. I was looking for something a bit above the units, but below the functionality. Your functions and the POM give me some ideas of what to look for in testing methods or tools.
It's too late now, Perl has reached its natural death with its really poor handling of Unicode, but if you don't need Unicode and have some time to lose, you should take a look at its testing abilities.
It's probably not true anymore, still due to the Unicode problem, but for years, and among many others, the Apache Foundation used Perl for their testing, and this while they were mostly coding in C/C++ at the time. Same for projects like FreeBSD (my memory isn't really sure about OpenBSD and NetBSD), which had a bunch of Perl test suites for regression purposes.
It sometimes requires that you include a temporary interface to the outside world in your code, so Perl can catch the result, but it's really robust and easy to use.


As for the topic of documentation, I think tests are a great documentation tool.
When it comes to compiled languages, yes. But for scripting languages I've never found anything better than Perl's POD (tags added right into the code) and Python's docstrings (comment-like documentation attached directly to the objects). This is mostly because the doc and the code are mixed, so if you're looking at the code, you also have the documentation for that part, while the said doc is still available separately for the times when you need to apprehend the whole library/module/whatever.
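For example, a Python docstring lives on the object itself, so it can be read in the code, from help(), or collected by documentation tools (the trim function is just a toy example):
Code:
    def trim(text):
        """Return text with leading and trailing whitespace removed."""
        return text.strip()

    print(trim.__doc__)   # the doc is attached to the function object itself
    help(trim)            # interactive help and doc generators read the same string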

I wonder why no compiler (to my knowledge) has been made able to handle something like Python's docstrings. It's not really difficult: if you find a C++ comment right after a function/method/class/whatever header, instead of dropping it, you log it into a "[filename]_doc." file, including the said header. And you have a raw documentation file that can later, possibly from the makefile, be processed to generate an HTML page, a PostScript one, or whatever.
And if you want to do things properly, you create a new comment tag, like for example /*doc* ... */, and the compiler doesn't bother with comments that are really just comments, only processing those that are embedded docs.
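A rough sketch in Python of what that extraction pass could look like (the /*doc* ... */ tag and the output file name follow the convention suggested above; nothing here is a standard tool):
Code:
    import re
    import sys

    DOC_BLOCK = re.compile(r"/\*doc\*(.*?)\*/", re.DOTALL)

    def extract_docs(source_path):
        # Collect every /*doc* ... */ block from a source file into "<file>_doc.txt".
        with open(source_path, encoding="utf-8") as f:
            source = f.read()
        blocks = [m.group(1).strip() for m in DOC_BLOCK.finditer(source)]
        with open(source_path + "_doc.txt", "w", encoding="utf-8") as out:
            out.write("\n\n".join(blocks))

    if __name__ == "__main__":
        extract_docs(sys.argv[1])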
 
  • Like
Reactions: Marzepain

Diconica

Well-Known Member
Apr 25, 2020
We might have different definitions of performance.
It seems your definition of performance is coupled to elegance, championed by leaving out anything unnecessary and thus being left with the most optimal code. I would see that as the "make it correct" stage.
In the "make it fast" stage, I would do whatever it takes to make it fast, and that is often not nice code: adding hacks, altering the architecture in uncommon ways, dropping into ASM, etc.
Every application can be made faster by hand-optimizing the ASM that comes out of the compiler. Given enough time and effort it's always possible to outsmart the compiler, because you start with the best of what the compiler can give you and you add to that.

As for your opening remark about commenting code and making documentation: in the "make it work" stage understanding is low, and comments are often for yourself and mediocre. Things should not be left in this state, but high pressure or lack of motivation makes programmers do it.
In the "make it correct" stage your understanding is high and it is reflected in the code and architecture, thus having little need for comments and documentation. Maybe the most extreme case is 's, where the code is readable as natural language. It would be great if programs were left in this stage.
In the "make it fast" stage your understanding of what is needed and what the machine is doing is at its maximum. The code is optimized for the machine, making it necessary to add comments and documentation for humans. There may be a need for this stage, but often it is too costly.
In the "make it cheap" stage the functionality would be packaged in a service or library for others to use. This would make it "cheap" for those others to get the same results as you did. It does, however, take the maximum effort in designing the code and architecture so as to make it intuitive for users, coupled with good documentation; otherwise others would not be able to do much with it. This stage is almost never a viable option, but there are gains to be had from releasing a web API or library to the public.

I do get the feeling that you like an imperative/procedural style of programming. I just read this article and this one, which reduce OO to managing function pointers and FP to managing assignment.
Do you think those are artificial limits for programmers, in that they will stand in the way of performance improvements?
The machine knows nothing of code-management concepts like classes or functions, only instructions.
Somebody like Jim Keller (minute 28:46) must have it all wrong when talking about abstraction layers.
As for my style of programming: I use different methods depending on what needs to be done. Simple: use the right tool for the job.
Also, there is no such thing as OOP without procedural code. Too many people don't get that. Every time you handle the data, other than just storing it, you are using a procedure.
Be it structured, OOP, procedural, ECS... I'll simply use what works best for the task at hand.
There is no best one; one is just better than another depending on the task.

Twenty years ago I would continually write ASM that was called from C or C++; ten years ago that started going down. The fact is, a lot of code coming out of compilers today, if you have your flags set correctly, is going to be what you would get if you hand-optimized the ASM. If you do the types of things I listed before, you will get, in about 30 minutes of work, a dozen times more performance than you will if you spend 2 hours on the ASM these days. I'm not saying don't work on the ASM; if you need that added speed, then do it. But do the easiest part first, then move to the ASM.

Moore's law: it depends on how you look at it. Single-thread performance vs the amount of data you can process in a single instruction tick, or multi-threading. We are just now reaching the 5 GHz barrier (not overclocked); it was 15 years ago that we hit 3 GHz.
So on single-thread performance, no, it's not holding true. However, we can process much larger instructions in a single tick now, and we have 64 cores and 128 threads on some CPUs; in that way it has held.

Back to your ASM. How many more lines of ASM do you have to go through to optimize it if you haven't first done the steps I brought up?

Jim Keller:
What he doesn't point out is that PyTorch uses Python, which relies on CPython, written in C. He fails to point out that it runs slower than if you compiled it entirely natively in C or C++. So those abstraction layers come at a cost. Just like in C or C++, we could write that code snippet he showed into a function, make it a library, and call it with a single line of code.
He makes a statement that it could only be done because of all these thousands of new transistors. Not exactly accurate.
His comment about firing up a data center to find a cat photo is also wrong. People have ASM libraries and code repositories they know work. I get his point: it is a lot easier. But he isn't entirely correct in how he states everything. Last I checked, we were able to do this stuff 20 years ago; it was just slower. So the benefit is we can do more instructions at the same time, out of order, and some things in parallel.
I think he just chose a poor way to make his point.
 
  • Like
Reactions: Marzepain

Marzepain

Newbie
May 4, 2019
Yet is Dijkstra's elegance still always the correct way nowadays?

Take modern OOP, for example, where classes aren't seen anymore as components of one project, but more as small libraries waiting to be reused. It changes the definition of "unnecessary", or perhaps broadens the definition of "necessity".

Imagine a class that is designed to process buffers that will be at most 255 bytes, because the class will never receive longer data. It can be, for example, because the data comes from an external source that can only send up to 255 bytes. Of course, the data can still be longer, but if that happens it means there's a transmission problem, and therefore it's the acquisition code that has to validate the size, whether to drop the data or to ask for it to be transmitted again.
Then one day you have to work on a second program that has to do the same processing, and so can perfectly well use this class. Except that the acquisition process is totally different and has absolutely no reason to care about the size of the data. Perhaps simply because the class isn't the only one that can possibly process the data: "thisTag:data" goes to the class that expects 255-byte buffers, while "thatTag:data" goes to another class.
You're then left facing solutions that are all "unnecessary" ones, while still all being valid solutions:
  • It's not necessary to make your class deal with a buffer bigger than 255 bytes; there will be none in the project you initially wrote the class for, and at that time there's absolutely no guarantee that this class will ever be used again.
  • It's not necessary to create a new class, even by inheritance, just to rewrite the code so that it works with buffers of variable size; it should have been done right from the start.
  • It's not necessary to make the acquisition process of your second project validate the size when it knows the data will be processed by "this class", because it's supposed to be the job of the class to reject data that it can't process.
Yes, point taken. Dijkstra's elegance is great for an algorithm, nice for an application, but difficult for a collection of components or a system with subsystems. As soon as subsystems or subcomponents are needed, they will be optimized for their purpose, ideally following the Single Responsibility Principle, and the whole will become less efficient. The wiki states "The reason it is important to keep a class focused on a single concern is that it makes the class more robust." and I think your example is a great illustration of that.
Would you agree that, in general, making code more robust would make it less efficient?

Totally agree: speed optimization is never nice code, never.
One of the common speed optimizations done in Perl is to forget that scopes exist. To summarize, parameters are passed through an array where each entry points to the original variable. You're supposed to assign the values to local variables, process them, then return the result. But since you already have a pointer to the original variable, why lose time creating a local variable, returning a value, then assigning the returned value, when you can just say "fuck the notion of scope" and inject the value directly into a variable of the calling scope?
And then you see code like sub trim { $_[0] =~ s/^\s+|\s+$//g; }, which you use by just writing trim( $a );. No need for an assignment: the value of $a is now a trimmed version of what it was before that line. It's totally dirty, and at the same time elegant in Dijkstra's sense; there's literally no unnecessary part, and you're even rid of some of the necessary ones.

If your code hasn't turned into some kind of abomination, then you haven't really optimized its speed; it's perhaps faster, but it can still be made even faster. Which tends to imply that you haven't made it fast, but rather made it finally correct.
Wow, I knew Perl is known for being a bit "dirty", but your code example is next-level scary. It does remind me of some regular expressions I have written. Those really needed a comment block to explain to my later self what the hell I was doing.

Haha. While looking at the Regular Expressions wiki article, it says, "Different syntaxes for writing regular expressions have existed since the 1980s, one being the POSIX standard and another, widely used, being the Perl syntax." Later it names utilities like AWK for processing. I did some AWK programming at school 20 years ago, so maybe I know a little Perl already :)

Is this really tooling ? And it's a totally legit question, because I'm not sure that someone really have the answer to this.
I think I know the sentiment, as a tool could get in the way of the test. Instead of your unit test testing your production code, you're really testing the tool, creating a false negative or, even worse, a false positive.

I was thinking more along the lines of describing the state of the whole system. To clarify: diagrams covering only one object were only useful if you had a God object, and that's a code smell, but for a subsystem, a package, or even an entire system it would work to make the unit tests more independent from the production code.
  1. The "tool" could bring the application into a test state. This could be a simple setup of objects, or something more system-level like configuration, or, in gaming specifically, loading a save game.
  2. Then do the test with the production code, so it's untouched by the tool.
  3. Then the "tool" would test the state. In your previous example it was the exit code and the log line. Those are two separate systems with separate intentions, but the architect of the whole system has defined a state for the whole system that covers multiple subsystems.

Of course, you use already-written functions, and therefore tools, to write your test suite, but if you take a step back and look at what Perl tests look like, you get more the impression of a dedicated sub-language (especially if the module can use the modern syntax, which isn't shown here):
Code:
    is( $whatever % 2, 0, "Even value" );
    ok( whatever(), "Linkable in case of success" );
    like( whatever(), qr/^.*Mr $firstname $lastname.*/, "Format name correctly" );
You're really and strictly validating assertions. This is in opposition to other languages, where you have to build the assertion at the same time as you validate it, then, depending on what you use, also handle how to react to the result of that assertion. Here you're doing it explicitly, and I think you'd have no real difficulty understanding what those tests stand for.
Those examples are more for unit tests than anything else, but the principle stays the same whatever kind of tests you write.

Which leads to the other point:

Perl is an old language; you can do OOP with it, but by itself the language has absolutely no idea what OO means. OO is more a hack that has been added to the language than a real object implementation. This has a major implication.
When you use a module that is supposed to be an object, you can use its functions as methods ($obj->method()) or as functions (method( $obj )). And if the module has been designed for it, you can also use them as pure functions (method()) that take their $self value from a variable global to the module scope, or that don't need it at all; it depends on what they are doing. It's not that exceptional to find objects with some utility methods that are part of the object but can also be used as totally independent functions.
Basically speaking, changing a procedural module into an object is really easy. Write a new function in place of the import procedure. Put a my $self = shift; at the top of all your functions, and you have an object. It will rely on values that are in the module scope, instead of being part of the object, but it will behave like an object. A careful "replace" to change things like $variableName into $self->{variableName}, and the object now handles the data internally.
It's obviously not wise to do it this way; a real refactoring would be better. But it shows relatively clearly that, from the point of view of the interpreter, there's no real difference between the FP design of a module and an OO one.
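A minimal illustration of this equivalence (the module and its methods are made up for the example); the same sub answers both as a method and as a plain function:
Code:
package Counter;
use strict;
use warnings;

# The "object" is just a blessed hash; the entries hold the data.
sub new {
    my ($class, %args) = @_;
    return bless { count => $args{count} // 0 }, $class;
}

# An ordinary sub; the invocant simply arrives as the first argument.
sub increment {
    my ($self, $step) = @_;
    $self->{count} += $step // 1;
    return $self->{count};
}

package main;

my $c = Counter->new( count => 10 );
print $c->increment(5), "\n";            # method call                          -> 15
print Counter::increment($c, 5), "\n";   # same sub, called as a plain function -> 20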
As a side note, it's what made me laugh when reading your OO vs FP and FP vs OO links, because the difference has absolutely no meaning, and at the same time every possible meaning, in Perl. Objects are just a hash structure where entries are either a value or code, which means that they are pure data. But at the same time they are effective objects (including the interface or abstraction meanings), and, as I said, you can also perfectly use them in a pure FP way.

This being said, all of the test modules rely on the same object (Test::Builder) and are designed as functions more to facilitate the processing than anything else. Basically speaking, you write the processing of the test, then call Test::Builder's ok method with the boolean result of the test and its name; the object will take care of adding the result to the test summary. And it doesn't matter that your tests come from different modules, they'll all be correctly numbered since they are all linked to the same main test object.

Which leads to the fact that test modules in Perl are neither strictly objects, functions, tools, nor interfaces, and at the same time all of these.
For a moment I thought your code example was a variation on some well-known pattern, but these must all be assertions with extra functionality. It's a bit scary that Perl is such a high-level language that it can do all that, but has no guards to prohibit misuse. I kind of get why Python is more popular among coding beginners than Perl.

It's now too late, Perl has reached its natural death with its really poor handling of Unicode, but if you don't need Unicode and have some time to lose, you should take a look at its test abilities.
It's probably not true anymore, still because of the Unicode problem, but for years, and among many others, the Apache Foundation used Perl for their testing, even while they were mostly coding in C/C++ at the time. Same for projects like FreeBSD (my memory isn't really sure for OpenBSD and NetBSD), which had a bunch of Perl test suites for regression purposes.
It sometimes requires that you include a temporary interface to the outside world in your code, so that Perl can catch the result, but it's really robust and easy to use.
Do I understand correctly that the power of Perl is in its ability to treat all data as text and run regular expressions on it? That, combined with everything being open/global/unrestricted, so you can inspect all parts of the system?
I know you are a Python expert too, and Perl is falling by the wayside, so would this be possible in Python? It's of a similarly high level, but it seems more guarded/closed off.

When it comes to compiled languages, yes. But for script languages I never found anything better than Perl's POD (tags added right into the code) and Python's docstrings (comment-like documentation directly attached to the objects). This is mostly because the doc and the code are mixed, so if you're looking at the code, you also have the documentation for that part, while the said doc is still available separately for the times when you need to apprehend the whole library/module/whatever.
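As an example, a minimal POD sketch interleaved with the (made-up) function it documents; perl itself skips everything between the =head/=cut tags, while perldoc or pod2html extract exactly those parts:
Code:
package Greeter;
use strict;
use warnings;

=head1 NAME

Greeter - tiny example module

=head2 greet( $name )

Returns a greeting for C<$name>.

=cut

sub greet {
    my ($name) = @_;
    return "Hello, $name!";
}

1;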

I wonder why no compiler (to my knowledge) has been made able to handle something like Python's docstrings. It's not really difficult. If you find a C++ comment right after a function/method/class/whatever header, instead of dropping it, you log it into a "[filename]_doc" file, including the said header. And you have a raw documentation file that can later, possibly from the makefile, be processed to generate an HTML page, a PostScript one, or whatever.
And if you want to do things properly, you create a new comment tag, for example /*doc* ... */. Then the compiler doesn't bother with comments that are really just comments, and only processes those that are embedded docs.
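Just to sketch the idea (outside the compiler, as a small Perl filter; the /*doc* tag and the output format are only the hypothetical convention described above):
Code:
#!/usr/bin/perl
use strict;
use warnings;

my $source = shift @ARGV or die "usage: $0 file.cpp\n";
open my $in,  '<', $source             or die "can't read $source: $!";
open my $out, '>', "${source}_doc.txt" or die "can't write ${source}_doc.txt: $!";

my $code = do { local $/; <$in> };   # slurp the whole file

# Capture each /*doc* ... */ block plus the first line that follows it,
# assumed to be the function/class header being documented.
while ( $code =~ m{ /\*doc\* (.*?) \*/ \s* ([^\n]*) }gsx ) {
    my ($doc, $header) = ($1, $2);
    print {$out} "$header\n$doc\n\n";
}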
Well, .Net has integrated XML documentation comments, which are massive blocks in between your functions. VS used to ship with a tool to convert the XML to documentation, but they dropped support for it. The XML comments were hot in 2001, when XML was hot. The funny thing is that there was a proposal to move to the Javadoc style for .Net Core, and it was shot down because the XML comments were part of the language. I think it's a loss, as the Javadoc style is much more elegant.
Looking at the POD syntax, it seems OK. I can see it's terse, but I haven't got the experience to know whether it gets confused with code. In a way it looks more like a template engine than something I would expect for documenting code. Again, a very powerful thing for Perl, but you have to have the knowledge to use it and the wisdom not to overuse it.
 
  • Like
Reactions: anne O'nymous

Marzepain

Newbie
May 4, 2019
61
49
As for my style of programming, I use different methods depending on what needs to be done. Simple: use the right tool for the job.
Also, there is no such thing as OOP without procedural code. Too many people don't get that. Every time you handle the data, other than just storing it, you are using a procedure.
Be it structured, OOP, procedural, ECS ... I'll simply use what works best for the task at hand.
There is no best one, just one is better than the other depending on the task.

20 years ago I would continually write ASM that was called from C or C++; 10 years ago that started going down. The fact is, a lot of the code coming out of compilers today, if you have your flags set correctly, is going to be what you would get if you hand-optimized the ASM. If you do the types of stuff I listed before, you will get, in about 30 minutes of work, a dozen times more performance than you will if you spend 2 hours on the ASM these days. I'm not saying don't work on the ASM; if you need that added speed then do it. But do the easiest part first, then move to the ASM.

Moore's law: it depends on how you look at it. Single-thread performance vs. the amount of data you can process in a single instruction tick, or multithreading. We are just now reaching the 5 GHz barrier (not OC). It was 15 years ago that we hit 3 GHz.
So on single-thread performance, no, it's not holding true. However, we can process much larger instructions in a single tick now, and we have 64 cores and 128 threads on some CPUs; in that way it has held.

Back to your ASM. How many more lines of ASM do you have to go through to optimize it if you haven't first done the steps I brought up?
I totally agree that setting the optimization flags on the compiler is a more efficient use of work time than optimizing by hand directly. The point I find strange is that you adamantly go for no wasted cycles in the application, while you do acknowledge that there is value in work time. Thus being efficient with work time is important to you, but there can be no tradeoff when it comes to application performance.
You said that you have worked on embedded systems, OSes, compilers, encryption, graphics, database systems. I know that those are all areas where performance counts, so it's logical that you value performance the way you do.
Yet you have also worked with ASM, C, C++, Basic, Pascal, Python, PHP, JS, Java, C# and a number of other languages. Then you must have seen that specific languages are better suited for specific tasks, like PHP for building a website.
Also, languages are better suited for certain kinds of developers; Python, for instance, is much more suited for beginners than, say, C/C++.
Languages and compilers are just applications, and they make tradeoffs in their design that are bad for performance but better for the ergonomics of the programmer. Optimizing for performance could defeat the reason you are using the language in the first place.
A famous example that stuck with me is Smalltalk: being generally slow, but everything being an object, you could attach code to the for-loop object instead of putting it in the loop body to make it iterate faster. To me that optimization defeated the purpose of Smalltalk being a simple-to-learn, simple-to-read language.
On the other hand there is "Pythonic" code, where veteran Pythonistas preach a certain writing style for Python in line with the language's ergonomics. They argue that for performance another language should be chosen.
Shouldn't you have a little compassion for yourself and your colleagues and make some room for readability? Or is that something comments and documentation are for?

Jim Keller:
What he doesn't point out is that PyTorch uses Python, which relies on CPython, which is written in C. He fails to point out that it runs slower than if you compiled it entirely natively in C or C++. So those abstraction layers come at a cost. Just like in C or C++, we could write that code snippet he showed into a function, make it a library, and call it with a single line of code.
He makes a statement about it only being possible because of all these thousands of new transistors. Not exactly accurate.
His comment about using assembly to fire up a data center and find a cat photo is also wrong. People have ASM libraries and code repositories they know work. I get his point, it is a lot easier. But he isn't entirely correct in how he states everything. Last I checked, we were able to do this stuff 20 years ago, it was just slower. So the benefit is we can do more instructions at the same time, out of order, and some stuff in parallel.
I think he just chose a poor way to make his point.
For completeness about Jim Keller, I also found a presentation and an interview with Lex Fridman where he restates his argument about abstraction layers. There is also David Patterson's talk about Moore's law that shines a light on the situation.
The reason I brought it up is that, for the abstraction argument, it matters little whether it's hardware or software. His InfiniBand example (12:20) in his presentation is about an optimization that crossed the OSI layer boundaries and didn't pan out, while the later technology that did respect the boundaries was a success. He is an architect of chips, not of software, so it may not apply, but he is successful while respecting those layer boundaries.

There is also the other extreme: if performance is all you care about, why not use an AI? Those are black boxes that can be fast. You get somewhat of a guarantee that it works correctly by training the AI. I wouldn't trust that kind of code to run a relational database, but it might be very fast.

By the way, I'm not trying to convince you that performance should be anything other than your top priority. You seem very successful with that belief. An F1 driver who believes that all driving should be fast can be very successful, though it may give some problems on normal roads.
 
Last edited:
  • Like
Reactions: anne O'nymous

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
10,384
15,294
Would you agree that, in general, making code more robust makes it less efficient?
Hmm... I'm tempted to answer that it totally depends on who you are in regard to this code.
For the coder, no. Making the code more robust tends to make it more efficient, because at this level efficiency is a matter of speed, easy interfacing, and being bug-free: things more easily achievable if you follow the single-responsibility principle.
For your boss, yes, it's less efficient. He would prefer more generic code, which can be put on a "hey, this is code we can reuse" pile and make you work faster next time. For him, efficiency is the reduction of delays and costs.
And finally, from the point of view of the project manager, the answer is "both", because for him efficiency regards both the code itself and the time needed to write it. His preference for one or the other will depend on whether he was promoted to project manager after years of coding, or has a beautiful "certified project manager" diploma.

But more globally, I think that anyone with 20 or more years of experience is too old to theorize about coding. Not because age by itself is a problem, but because the world has changed too much during those two decades.
Take me as an example: I started with the early 80's demo scene. Language: ASM or nothing; you needed the smallest possible size and the fastest possible code. I learned to count the ticks, sometimes spending hours optimizing the code in order to gain one or two. This was mandatory at a time when CPUs started at 2.5 MHz, and it's ridiculous now that they tend to start at 2.5 GHz and are multi-core. It's not totally accurate, but nowadays, between its multi-core structure and its frequency, the worst computer is at least 2000 times faster. And the same can be said for the RAM, which was below a MB back then and now starts at 4 GB.
The level of optimization we had to do at that time, even for basic software, has totally lost its reason to exist. With, obviously, the exception of critical software or the critical parts of a piece of software; a realtime 3D engine still needs to have the 3D part heavily optimized, but it's almost as if you could just use a script language for everything else. And that's what is sometimes done: an engine like Gamebryo, which wasn't the best one but neither the worst, takes all its instructions from a dedicated interpreted script language, and still plays games like Fallout New Vegas smoothly.

But it's not the only major change. When I started, home computing was in its infancy. The father used the computer to plan his budget, the mother tried to see if she could use it to organize her cooking recipes, and the children used it to play. Nowadays we have in our pocket a small device that lets us do way more; the mother doesn't bother organizing her cooking recipes anymore, since in less than one minute she has access to ten variations of the same recipe.
Spending at least a full year working on a piece of software was the norm; now, unless you're working on a big office automation system, an OS or a AAA game, if you need more than a few months you'll miss your market.

And, while we obviously aren't stupid and are perfectly aware of those changes, we have more difficulty understanding what they imply, still theorizing as if we were in the 80's/90's. The whole data-centered approach (I forget the name of the paradigm), I both understand it and don't understand it. Yes, nowadays everything is data, but that data is processed by code, so why the need to radically change the center point? There is a reason, I'm sure; I'm just too old, having spent too many years focusing on the code, to be able to see/understand it.
And things like Kay's "make it fast" have less meaning nowadays. It's all relative because it's Python, so not only a script language but also one of the slowest, but for my pleasure (and perhaps a bit of nostalgia) I used Python to write a raster class for Ren'py. Then I started to optimize its speed. One day of work for a speed increase between 5% and 10%, which made me gain less than 0.001 second. It's probably possible to gain more, but the gain will still be too ridiculous to be worth the pain.
Well, in fact it's not such a relative example. If that's all you can gain with one of the slowest script languages, what is the gain with correct code in C, if you don't go for "dirty" optimization? A nanosecond? There's code where it still makes a difference, but not that much.
"Make it fast" nowadays mostly implies a change in the language you use, while at first it was more intended as "use the most speed-efficient algorithm". But unless you're using a script language and totally messed up your algorithm, nowadays changing the algorithm implies a totally insignificant gain in terms of speed.

And it's the same for Martin's theorizing. I tend to agree with him most of the time, but I know that the juniors at work look at it strangely. We come from another age, and have a totally different conception of "computer science" than they do.


Wow, I knew Perl is known for being a bit "dirty", but your code example is next level scary.
I guess it's not the time to tell you that I have a whole module where the code for the object is written (in the main scope, not its own) at execution time, partly based on variables, so it can adapt to the current situation :D
Yes, Perl is "dirty". To my knowledge it's the only language whose own author confesses that he absolutely doesn't know what it can do. Not in the sense that he doesn't know its capabilities, but because he doesn't know its limits.


I did some AWK programming at school 20 years ago, so maybe I know a little Perl already :)
In fact, we all know a little bit of Perl. While C is the father of most modern languages, in a way Perl is the big brother, the one that experimented with everything before anyone else. Not everything Perl did has been adopted, but modern languages still took what was most interesting.


I was thinking more along the lines of describing the state of the whole system.
Then yes, Perl testing is really close to tooling, especially with the pretty explicit names of the test functions. And since there's (globally speaking) already a test for anything, you aren't limited to basic states. You'll have a bunch of is and other ok tests for the intermediate process, and end your test suite with an is_valid_json that will tell you that, in the end, you effectively produce, well, a valid JSON structure.
Which may lead you to have a test suite that simulates the whole process of your software. A process that will be validated at each step by validating its actual state. Something that can look like:
Code:
$inputBuffer = "some test string";

lives_ok { acquireData() } "test name.";
is( $intermediateBuffer, "some test string", "test name." );

lives_ok { processData() } "test name.";
is( $intermediateBuffer, "now processed string", "test name." );

stdout_is { sendData() } "final result", "test name.";
Which obviously helps you keep the whole picture in mind. And if you have more than one state to validate, just group the tests:
Code:
test "Data acquisition", sub{
    lives_ok { acquireData() } "test name.";
    is( $intermediateBuffer, "some test string", "Data received." );
    ok( $waitingForData, "Back to waiting state." );
};
If all the tests in the group pass, you'll have only one line in the test summary, telling you that the group passed. Otherwise, you'll know which test in the group failed.

And the fact that you don't have to deal with the result itself also helps. You don't have to write the "pass"/"fail" output, which means that, in a way, you don't even have to know the success condition. Whether or not you know what a valid JSON is, it's the task of is_valid_json to know it, and it will tell you if you failed. You can then focus on what matters: "is this correct, whatever 'correct' means".


It's a bit scary that Perl is such a high level language that it can do all that, but has no guards to prohibit misuse. I kind of get why Python is more popular among coding beginners then Perl.
Personally, I see them as the two opposite limits of coding.
With Perl you're free to do whatever you want, and to do it in whatever way you want. But it will be "dirty", and you probably should never look at what you wrote two years ago, because you will be scared. And with Python you have very little freedom, but every time it will be the cleanest code you've ever written; well, unless you're like me, corrupted by Perl.
Which falls back to the "too old to theorize" part above. They are two radically different approaches that correspond to two different epochs. Not in regard to their creation dates, which aren't that different ('87 for Perl, '91 for Python), but in regard to their popularity. Perl was almost instantly popular, while it took almost 20 years for Python to effectively rise.

Perl was popular in the 90's and early 00's, which were a time of pure experimentation. And what's better for experimentation than a language that has next to no limits, while still being robust and (relatively speaking) fast? It was also the start of the Internet, with mostly plain-text protocols; Perl being initially made for text processing, it was perfect! And since coding, and more globally computer use/administration, was a matter of passion before being a job, the "dirty" side of Perl wasn't really a problem.
It was in fact more of an advantage. When you wanted to know if "it's possible to do this", you knew that Perl would not be the reason it was impossible. Of course, you could have used C, which also wouldn't have been the reason it was impossible, but you would have needed at least 10 times longer to do it. And, while spending one hour to see if a stupid idea that crossed your mind is possible isn't that unreasonable, spending a full day on it is less tempting. Therefore, you tried with Perl, and once you knew that it was possible, you switched to C to effectively do it, and do it correctly this time.
But nowadays, experimentation is over. The Internet is stable and robust, new protocols send and receive binary data, and computers are everywhere. What is needed is a structured, foolproof language, because half (if not more) of the people who'll have to use it aren't following their passion, they chose a job. And their job will potentially impact the lives of millions of people, so it needs to be done in the cleanest possible way. And Python teaches that, which will show when they use compiled languages.


Do I understand correctly that the power of Perl is in it's ability to treat all data as text and run Regular Expressions on it? That combined with that everything is open/global/not restricted so you can inspect all parts of the system?
Globally, yes. As I said, it's initially a text-processing language, and since text is just a special kind of binary value, it can process any kind of data. More or less easily depending on what you want to do, but a regex can test non-character bytes, so... Add its flexibility, the fact that Unix-like systems almost always have a way to expose their information as plain text, while most protocols used in networking are at least partly plain text, and... well, what are the effective limits, except speed?
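For instance, something like this (the file name is just an example) checks the binary PNG signature, mostly non-printable bytes, with a plain regex:
Code:
open my $fh, '<:raw', 'image.png' or die $!;
read $fh, my $header, 8;
print "Looks like a PNG\n" if $header =~ /\A\x89PNG\r\n\x1a\n/;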
The early monitoring systems were in Perl, and even nowadays some still have at least a way to accept Perl extensions. The early CGI scripts were in Perl, even for binary processing. And so on.


I know you are a Python expert too and Perl is falling by the wayside, so would this be possible in Python? It's of similar high level, but is seems more guarded/closed off.
Python expert is quickly said. I know many things, but I'm far from being an expert. And I know so much mostly because, coming from Perl, I don't accept it when I face an "it's not possible to do this"; it totally hurts my conception of what a coding language is ;)

As for the possibility of using Python for a Perl-like test suite: theoretically yes, but in practice not really. The problem isn't feasibility; technically it would be easier and probably more robust to do it with Python (I even have a small Test::More port for Ren'py, which I have to totally rewrite, alas). No, the real problem is that Perl has three decades of modules solely dedicated to testing, while Python has, to my knowledge, a single test library that is globally limited to is and ok assertions. Therefore, to reach the level of simplicity and flexibility you can reach with Perl, you first have to write your own libraries.
But I don't think that the number of test libraries will ever increase. The lack of "deep" test libraries for every language except Perl already shows that tests aren't really used. Martin's note you linked previously regarding TDD also implies the same. And on top of that there's the "easier to ask forgiveness than permission" approach of Python, which tends to relegate tests to a bygone age; if things go west, there will be an exception, I just need to catch it and react. And if my code doesn't produce the expected result, I'll see it. Therefore, why should I bother to test?
If you look at Ren'py, a game engine ported to 5 OSes, it has only 6 series of tests... Less than 40 KB of code, including the comments; for comparison, the part processing the audio needs more than 50 KB. And all of them are basic and totally coupled to the actual task they test. Is something working on Windows and not on Linux? The only way to know is to wait for the "hey, it doesn't work" from the users. Version 7.4.2 was released solely because 7.4.1 had a regression.

Nobody tests nowadays, so nobody will write test libraries. And it's a vicious circle, because the reason nobody tests is mostly the fact that there's a real lack of test libraries.
Going back to Ren'py: it has tons of functions dedicated to the display. Relatively speaking, it's easy to test whether the positioning is correct. Take a screenshot, and compare it to a screen that you built yourself, where each element is perfectly positioned. Ren'py can take screenshots itself, so you can have an automated test for this, to ensure that the positions are correct whatever the OS. But you have to write the code that will compare the two images, including a 1-pixel tolerance to account for a possible approximation factor. Therefore, you don't test.
But if this code was already available you would just have to write something like:
Code:
   [position this here, that there, ...]
   saveActualScreen( "actual image.png" )
   compareImage( "actual image.png", "reference position.png", approximation=1 )
and so you would probably do it. Simply because it would take you less than 10 minutes, against probably more than a whole week if you had to write the code for the compareImage part and test it seriously to ensure that it doesn't return wrong results.
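Purely as an illustration of what such a helper could look like (the file names match the hypothetical example above, and I'm assuming the GD module here), a rough sketch: every pixel of the actual screenshot has to match some pixel of the reference within "approximation" pixels, to absorb small positioning differences.
Code:
#!/usr/bin/perl
use strict;
use warnings;
use GD;

sub compareImage {
    my ($actual_file, $reference_file, $radius) = @_;
    $radius //= 0;

    my $actual    = GD::Image->newFromPng($actual_file, 1)    or die "can't read $actual_file";
    my $reference = GD::Image->newFromPng($reference_file, 1) or die "can't read $reference_file";

    my ($w,  $h)  = $actual->getBounds();
    my ($rw, $rh) = $reference->getBounds();
    return 0 unless $w == $rw && $h == $rh;

    for my $y (0 .. $h - 1) {
        PIXEL: for my $x (0 .. $w - 1) {
            my @want = $actual->rgb( $actual->getPixel($x, $y) );
            # Accept the pixel if any position within the radius matches.
            for my $dy (-$radius .. $radius) {
                for my $dx (-$radius .. $radius) {
                    my ($px, $py) = ($x + $dx, $y + $dy);
                    next if $px < 0 || $py < 0 || $px >= $w || $py >= $h;
                    my @got = $reference->rgb( $reference->getPixel($px, $py) );
                    next PIXEL if "@want" eq "@got";
                }
            }
            return 0;   # no acceptable match for this pixel
        }
    }
    return 1;
}

print compareImage("actual image.png", "reference position.png", 1)
    ? "positions ok\n" : "positions differ\n";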


Well .Net has integrated witch are massive blocks in between your functions. [...]
It's been really too long since I dropped compiled languages, and it's starting to really show. I still use C/C++ at work, but mostly as a secondary coder; I'm more of a project manager and in charge of the tests, so I haven't really tried to update my knowledge.


Again, a very powerful thing for Perl, but you have to have the knowledge to use it and the wisdom not to overuse it.
Well, it's Perl, the "knowledge versus wisdom" is implicit ;)



Edit: Oops, had closed a quote block that was in fact a code one.
 
Last edited:
  • Like
Reactions: Marzepain

Diconica

Well-Known Member
Apr 25, 2020
1,100
1,150
I totally agree that setting the flags for optimization on the compiler is a more efficient use of work time then going by hand directly. The point what I find strange is that you adamantly go for no wasted cycles in the application, while you do acknowledge that there is value in work time. Thus being efficient with work time is important to you, but there can be no tradeoff when it comes to application performance.
You said that you have worked on embedded systems, OSes, compilers, encryption, graphics, database systems. I know that those are all areas where performance counts, so it's logic that you value performance in the way you do.
Jet you have also worked with ASM, C ,C++, Basic, pascal, Python, PHP, JS, Java, C# and a number of other languages. Then you must have seen that specific languages are better suited for specific tasks, like PHP for building a website.
Also languages are better suited for certain kinds of developers, like Python is much more suited for beginners then say C/C++.
Languages and Compilers are just applications and they make tradeoffs in their design that are bad for performance, but better for ergonomics of the programmer. Optimizing for performance could defeat why you are using the language in the first place.
A famous example that stuck with me is, being generally slow, but everything being an object, you could attach code to the for loop object instead of in the loop code to make it iterate faster. To me that optimization defeated the purpose of Smalltalk being a simple to learn, simple to read language.
On the other hand there is code, where veteran Pythonistas preach a certain writing style for Python in line with the language ergonomics. They argue that for performance another language should be chosen.
Shouldn't you have a little compassion with yourself and your colleges and make some room for readability? Or is that is something comments and documentation is for?


For completeness about I also found a and an with Lex Friedman where he restates his argument of abstraction layers. There is also David Patterson's about Moore's law that shines a light on the situation.
The reason why I brought it up is that for the abstraction argument it matters little if it's hardware or software. His InfiniBand example (12:20) at his presentation is about optimization that crosses the OSI boundaries that didn't pan out and the later technology that did respect boundaries was a success. He is an architect of chips not of software so it may not apply, but he is successful respecting those layer boundaries.

There is also the other extreme, that if performance is all you care about, why not use an AI. Those are black boxes that can be fast. You get somewhat of a guarantee that it works correct, by training the AI. I wouldn't trust that kind or code to run a relational database, but it might be very fast.

By the way, I'm not trying to convince you that performance should be anything else as your top priority. You seem very successful with that belief. A F1 driver is that beliefs that all driving should be fast can be very successful. Though it may give some problems on normal roads.
While I do use PHP often, I find myself making use of calls to C-built systems that run outside of it. There is a limit to what PHP or any language does. As I said before, when it comes to programming and styles: best tool for the job. That also goes for languages.

Just because code is minimized doesn't make it unreadable. Minimizing code doesn't mean doing stuff like using short names for everything. In fact, I generally try to use reasonably descriptive names. With auto-complete in IDEs and other tools, it only makes sense to do so. C and C++, when used correctly, are no less readable than Python or Java.

Take my original post: the issue came down to the fact that I could not readily tell that the button-clicks structure I was using was a hashtable.
I guess I could start putting something like ht_ in front of those types and vec_ in front of vector-based ones, so I can easily recognize that they are a hashtable or vector data type without tracking them back to the library or source file.
But it serves just as well, maybe better, to simply put a comment where I initialized it, as I did after the issue, noting that it is a hashtable and the file it comes from.

Performance isn't my top priority, btw. My first priority is making sure the program does the task, and eliminating bugs. If it has bugs, then in my book that first goal isn't completed. While there is some tuning I can do at that stage, the bigger performance tuning comes after I have the overall system working correctly.
 
Last edited:
  • Like
Reactions: Marzepain

Marzepain

Newbie
May 4, 2019
61
49
Hmm... I'm tempted to answer that it totally depend who you are in regard of this code.
For the coder, no. Making the code more robust tend to make it more efficient, because at this level efficiency is both a matter of speed, easy interfacing, and being bug free. Things more easily achievable if you follow the single-responsibility principle.
For your boss, yes, it's less efficient. He would prefer a more generic code, that will be put on a "hey, those are codes that we can reuse" pile and make you works faster next time. For him, efficiency is in the reduction of the delays and costs.
And finally, from the point of view of the project manager, the answer is "both", because for him efficiency regard both the code itself and the time needed to write it. His preference for one or the other will depend if he have been promoted project manager after years of coding, or if he have a beautiful "certified project manager" diploma.
I know that "The project manager's preference for efficiency of the code versus the time to write it depends on whether he was promoted to project manager after years of coding, or has a beautiful Certified Project Manager diploma." hides a world of hurt. Although I must add that dealing with a PMI or PRINCE2 certified project lead is better than having to deal with those that have no budget or decision power; just a pretty face the bully boss got from a café. In part this is because every business is an IT business these days. I probably made poor choices in companies to work for.

But more globally, I think that anyone with 20 or more years of experience is all too old to theorize about coding. Not because age by itself is a problem, but because the world changed too much during those two decades.
I think that those with enough experience know how to value the different aspects of the craft. It's just that different people place different accents on certain aspects. You are right that the world changed a lot. The mathematical things still hold true, things about quality are still mostly true, but the economics totally changed.

Take me by example, I started with the early 80's demo scene. Language, ASM or nothing, you need to have the smallest possible size and the fastest possible code. I learned to count the ticks, sometimes passing hours to optimize the code in order to gain one or two. This was mandatory at this time where CPU started at 2.5 MHz, and is ridiculous now that they tend to start at 2.5 GHz and are multi-core. It's not totally true, but nowadays, between its multi-core structure and frequency, the worse computer is at least 2000 times faster. And the same can be said for the RAM, that was below the MB at this time and now starts at 4GB.
I must say, you are a better programmer than me. That's really hardcore and I admire it. ASM is like seeing the green letters of the Matrix: I can view it, but I'm no Neo.

The level of optimization we had to do at this time, even for basic software, totally lost its reason to exist. This with, obviously, the exception of critical software or critical part of the software ; a realtime 3D engine still need to have the 3D part overly optimized, but it's almost if you can't just use a script language for everything else. And it's what is sometimes used, an engine like Gamebryo, that wasn't the best one but neither the worst, take all its instruction from a dedicated interpreted script language, and still play smoothly games like Fallout New Vegas.
It's amazing. And now the switch is being made from scripting to visual languages. That something like Blueprints for Unreal can function, enabling designers to program, takes it to a whole new level. Although I hear that the designers who do that need to know a thing or two about programming.

But it's not the only major change. When I started, home computing was at its early age. The father used the computer to plan his budget, the mother tried to see if she could use it to order her cooking recipes, and the children used it to play. Nowadays we have in our pocket a small device that permit us to do so way more ; the mother don't anymore care to order her cooking recipes, in less than one minute she have access to ten variations of the same recipe.
Passing at least a full year working on a software was the norm, now unless you're working on a big office automation system, on an OS or on a AAA game, if you need more that few months, you'll miss your market.
Painfully so. The market dictates the development. I wish I could find a place to hide out, but the few places there are, are riddled with politics and games to maintain face and position. Just like with reality-TV Big Brother-style game shows, the most capable, the incapable and the weird get voted out first, leaving the average and compliant. I can infiltrate like a spy for a while, but sooner or later I get found out. Startups are a better match, but I have a handicap in automatically focusing on quality, while a startup is really about putting out crap as soon as possible, hoping something will stick.
Maybe I should heed the old adage "Those who can, do; those who can't, teach." and become a teacher. Academia is also a place full of politics and games, but maybe it would work. There is an alternative slant on the teaching thing, becoming a podcast/YouTube interviewer/teacher, but I think Lex Fridman has the market cornered for that. Maybe a combo with game development, but I don't think there is enough money in it to survive.

"Make it fast" nowadays mostly imply a change in the language you use, while at first it was more intended as "use the most speed efficient algorithm". But unless you're using a script language and totally messed your algorithm, nowadays changing the algorithm imply a totally insignificant gain in terms of speed.
Interestingly put. I need to remember this.

And it's the same for Martin's theorizing. I tend to agree with him most of the time, but I know that the junior at works look them strangely. We come from another age, and have a totally different conception of "computer science" than them.
Well, juniors... what can you do? Those "Google" programmers find stuff on the net, throw it in, get the praise from their manager, then leave the mess to their colleagues, who then get reprimanded for taking so long dealing with the mess. I left on a number of occasions because of this, but now I can't get work at a decent firm because I didn't follow standards like unit testing religiously, and not at a bad firm for being too slow. Maybe it's just in my mind, but I'm hitting 45 and am practically unemployable, while there is a shortage of 50K out of 150K IT people in NL. Bizarrely, every headhunter thinks I'm a unicorn with a pot of gold attached to it, while every hiring manager has very different ideas.

I guess that it's not the time to tell you that I have a whole module where the code for the object is wrote (on the main scope, not its own) at execution time and partly based on variables, for it to adapt to the current situation :D
Yes, Perl is "dirty". To my knowledge it's the only language for which it's own author confess that he absolutely don't know what it can do. Not in the way that he don't know its capabilities, but because he don't know its limits.
That's a cool statement. Have to admire that one.


In fact, we all know a little bit of Perl. While C is the father of most modern language, in a way Perl is the big brother, the one that experienced everything before anyone else. Not all what Perl did have been implemented, but moderns languages still took what was the most interesting.

Then yes, Perl testing is really near to tooling, especially with the pretty explicit name of the test functions. And since there's (globally speaking) already a test for anything, you aren't limited to basic states. You'll have a bunch of is and other ok tests for the intermediate process, and end your test suit with an is_valid_json that will tell you that in the end you effectively product, well a valid JSON structure.
What lead you to possibly have a test suit that simulate the whole process of your software. A process that will be validated at each step by validating its actual state. Something that can look like :
Code:
$inputBuffer = "some test string";

lives_ok( acquireData(), "test name." );
is( $intermediateBuffer, "some test string", "test name." );

lives_ok( processData(), "test name." );
is( $intermediateBuffer, "now processed string", "test name." );

stdout_is( sendData(), "final result", "test name." );
What obviously help you to keep the whole picture in mind. And if you've more than one state to validate, just group the tests :
Code:
test "Data acquisition", sub{
    lives_ok( acquireData(), "test name." );
    is( $intermediateBuffer, "some test string", "Data received." );
    ok( $waitingForData, "Back to waiting state." );
};
If all the tests in the group pass, you'll have only one line in the test summary, telling you that the group past. Else, you'll know what test have failed in the group.

And the fact that you haven't to deal with the result itself also help. You don't have to write the "pass"/"fail" output, what mean that, in a way, you don't even have to know the success condition. Whatever if you know or not what is a valid JSON, it's the task of is_valid_json to know that, and it will tell you if you failed. You can then focus on what matter, "is this correct, whatever 'correct' mean".
I will look into Perl. At the very least it gives me ideas how to do things.

Personally I see them at the two opposite limits of coding.
With Perl you're free to do whatever you want, and to do it in whatever way you want. But it will be "dirty" and you probably should never look at what you wrote two years ago, even you will be scared. And with Python you have really few freedom, but every time it will be the cleanest code you've ever wrote ; well, unless you're like me, corrupted by Perl.
What fallback to the "too old to theorize" part above. They are two radically different approach that correspond to two different epoch. Not in regard of their creation date, that aren't this different (87 for Perl, 91 for Python), but in regard of their popularity. Perl was almost instantly popular, while it took almost 20 years for Python to effectively rise.

Perl was popular in the 90's and early 00's, that were a time a pure experiments. And what's better for experimentation, than a language that have near to no limits, while still being robust and (relatively speaking) fast ? It was also the start of Internet, with mostly plain text protocols ; Perl being initially made for text processing, it's perfect ! And like coding, and more globally computer use/administration, was a question of passion before being a job, the "dirty" side of Perl wasn't really a problem.
It was in fact more an advantage. When you wanted to know if "it's possible to do this", you knew that Perl would not be the reason why it's impossible. Of course, you could have use C, it also wouldn't have been the reason why it's impossible, but you would have needed at least 10 more times to do it. And, while passing one hour to see if a stupid idea that crossed your mind is possible isn't this unreasonable, passing a full day for this is less tempting. Therefore, you tried with Perl, and once you knew that it was possible, switch to C in order to effectively do it, and do it correctly this time.
But nowadays, experimentation are over. Internet is stable and robust, new protocols send and receive binary data, and computers are everywhere. What is needed is a structured foolproof language, because half (if not more) of the peoples who'll have to use it aren't following their passion, they choose a job. And their job will potentially impact the life of millions peoples, so it need to be done in the cleanest possible way. And Python is teaching that, what will be seen when they'll use compiled languages.

Globally yes. As I said, it's initially a text processing language, and since text is just a special kind of binary value, it can process any kind of data. This more or less easily depending of what you want to do, but RegEx can test none character Bytes, so... Add its flexibility, the fact that Unix-like systems almost always have a way to access the information in plain text, while most protocols used in networking are at least partly in plain text, and... well, what are the effective limits except the speed ?
The early monitoring systems were in Perl, and still nowadays some have at least a way to accept Perl's extensions. The early where in Perl, even for binary processing. And so one.
That really takes me back. CGI makes me think of cyberspace and cyberpunk, of the limitless possibilities the internet could have. The weirdos, the hippies and the nut-jobs that pioneered the net. When a burned-out stock trader from NY wanted to retire with a bookstore in San Francisco, being good to people, and did the internet thing because it was cool. Bezos and Amazon are a long way from where they started, and so are all the rest.

Python expert, it's quick to say. I know many things, but I'm far to be an expert. And I know so much mostly because, coming from Perl, I don't accept when I have to face a "it's not possible to do this" ; it totally hurt my conception of coding language ;)
Much appreciated ;)

As for the possibility to use Python for a Perl-like test suit, theoretically yes, but in practice not really. The problem isn't in the feasibility, technically it would be easier and probably more robust to do it with Python ; I even have a small Test::More port for Ren'py ; that I have to totally rewrite, alas. No, the real problem is that Perl have 3 decades of modules solely dedicated to testing, while Python have, to my knowledge, a single test library that, globally, limits to is and ok assertions. Therefore, to reach the level of simplicity and flexibility you can reach with Perl, you've first to write your own libraries.
But I don't think that one day the number of test library will increase. The lack of "deep" test libraries for every language except Perl already show that tests aren't really used. The Martin's note you linked previously regarding TDD also imply the same. And now have to be added the " " approach of Python, that tend to relegate test to an old age ; if things goes west, there will be an exception, I just need to catch it and react. And if my code don't produce the expected result, I'll saw it. Therefore why should I bother to test ?
Checking first is faster than throwing exceptions, because the exception machinery is not optimized for speed, at least not in .Net. Python doesn't care much about speed, so I get the approach.

If you look at Ren'py, so a game engine ported to 5 OSes, it have only 6 series of tests... Less than 40 KB of codes, including the comments ; for comparison, the part processing the audio need more than 50KB. And all are basic and totally coupled to the effective task they tests. Is something working on Windows and not on Linux ? The only way to know it is to wait for the "hey, it don't works" from the users. The version 7.4.2 was released solely because the 7.4.1 had a regression.
Well Ren'Py is great, but it's not like lives depend on it. It does sound sloppy. Strange for such a mature project to be so wild.

Nobody test nowadays, so nobody will write test libraries. And it's a vicious circle, because the reason why nobody test is mostly due to the fact that there's a real lack of test libraries.
Going back to Ren'py, it have a tons of functions dedicated to the display. Relatively speaking it's easy to test if the positioning is correct. Take a screenshot, and compare it to a screen that you build yourself, and where each elements are perfectly positioned. Ren'py can take screenshot itself, so you can have an automated test for this, to ensure that the position are correct whatever the OS. But you've to write the code that will compare the two images, including a 1 pixel variation to take count of a possible the approximation factor. Therefore, you don't test.
But if this code was already available you would just have to write something like:
Code:
   [position this here, that there, ...]
   saveActualScreen( "actual image.png" )
   compareImage( "actual image.png", "reference position.png", approximation=1 )
and so you would probably do it. This simply because it would need you less than 10 minutes, against probably more than a whole week if you had to write the code for the compareImage part and test it seriously to ensure that it don't return wrong result.
There is hope that things get adopted more when they get simpler. I do sometimes feel that amateur game makers would rather have problems than avoid them. There is something heroic about chasing a bug for days, while people who carefully program to avoid mistakes are seen as boring. Although Ren'py does some testing, so at least they value testing. I hope things will get better.
 
  • Like
Reactions: anne O'nymous

Marzepain

Newbie
May 4, 2019
61
49
Performance isn't my top priority btw. My first priority is making sure the program does the task and eliminate bugs. If it has bugs then in my book that first goal isn't completed. While there is some tuning I can do at that stage the bigger performance tune comes after I have the overall system is working correctly.
I think those with experience value all parts of the craft, but place different accents on parts of it. It depends on the circumstances. I admire your focus on performance, because many hobbyist programmers go into professional programming with the same mentality, but they are made to see things differently. You managed to maintain it, and that shows me it's possible.
For me, the accent is not on performance, but on architecture, design and ergonomics. I hold true the saying "Man is the measure of all things". You are probably a much better programmer than me.

What is funny is that you mentioned ECS. The wiki has been updated, but it used to state that it was introduced at GDC in 2006, and that there were ideas floating around before that. Now it states that it hails back to much older software architecture principles. You may think it's hubris, but I came up with something very similar to ECS in 1998, when I applied every design pattern I could find to game engine design. In 2000 I did my final internship for my bachelor's degree at a game startup, trying to implement an editor with my version of ECS. The startup failed, although they went another way in the end, and I failed personally, because of performance. I wanted to implement it with maximum speed and tried C/C++, but using the same macro system the STL implementation is known for. I managed to get compiler errors on empty lines, among other things. A lot of other things went wrong, also on a personal level.
After months I pulled myself together and managed to write my bachelor's degree paper in 2001, containing ECS. It never reached the trade press, and I doubt my alma mater has managed to save a copy of it.
It even contained a paragraph on the implementation of the separate system arrays/lists, although somebody on gamedev.net had suggested that he was working on that. I think that person was the one who did the presentation at GDC. He deserves the credit, as I was disillusioned with game development and went a different way. I may even have held him back for years.
The thing is, ECS didn't make any inroads into mainstream game development for more than a decade. The game industry's focus on performance is now getting a little less intense, so a thing like Unity ECS has a chance. Although I have already heard developers criticizing it, mainly because they don't want to change the way they do things.
As a side note, the E of Entity was inspired by the entities of ERD diagrams. I was also influenced by work invented at the same time at the university the startup was associated with. Later, Eric Evans created Domain-Driven Design, which in my mind takes these concerns even further and is now back in fashion. The .Net framework fixed a lot of what was wrong by not defining a virtual machine the way Java did, but taking the intermediate representation many compilers create and adding code management to it. When MS introduced XNA, the deal was done for me. While XNA was not the component style I envisioned, but more a managed version of DirectX, it made stuff so much simpler that I felt it was enough. Stupid MS killed it in 2013, leaving it to MonoGame and FNA to carry on. These days Unity and Unreal have taken lots of development out of the equation. Visual programming languages like Blueprints are taking over from scripting languages. This relegates the movement in game scripting from Lua and Python to C#/.Net, at the same level as Java bytecode and .Net IL, now surpassed by JavaScript frameworks and NodeJS. A curious fact of history, but no longer relevant anymore.

The gist of my story is this: I'm a Software Architect. I combine theory and practice into some new structure. You may think this is all a big fat lie, or that I'm one of those people who draw clouds and make life difficult for real programmers. That's up to you. I recently tried to create a language, started a movement to have a layer of indirection between the assets and the code via an open datafile, so broken assets can be fixed in amateur games, and, as some here know, I created the article "Game Assets folder structuring approaches and dimension. Introducing the Theatrical approach." Probably nothing will come of those things. Maybe I'm just crazy. I just happen to like structuring things.

Unfortunately, the world does not need Software Architects anymore. I try to sell myself as a Software Architecture coach these days. Most of the flexibility Design Patterns offered first moved into frameworks and tools, and has now moved into the languages used and the processes around software development.

The thing that inspires me about your story with performance is that there is a niche where it matters. There may be a niche for architecture too. I lack the success of somebody like Jim Keller, so maybe it's not for me. As they say in Hollywood, "When you're hot, you're hot; when not, then not.", but I can try.
 
Last edited: