Skalli
Member
- Nov 1, 2017
StationmasterDev making an informed decision like that is totally fine. I doubt the performance impact will be noticeable, though; sure, decimal is slow, but it's not like vector math where you have millions of operations per frame (I assume). In our software there were a great many decimal calculations, but they were never the performance bottleneck.
I worked with payment systems for stores before, so I have some background in that field. Many stores actually only care about precision up to 4 decimal digits. For the Decimal type, the precision can actually be set, so if you only care about 2 digits, set it to two.
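As a rough sketch of what "setting the precision" looks like, here's how Python's `decimal` module handles it (just an illustration; the Decimal type discussed above may expose this differently). `quantize` pins a value to a fixed number of decimal places with an explicit rounding mode:

```python
from decimal import Decimal, ROUND_HALF_UP

# Keep monetary values at exactly 2 decimal places.
price = Decimal("19.999")
rounded = price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(rounded)  # 20.00
```

Being explicit about the rounding mode matters for money: half-up is what most customers expect, while the default in some libraries is banker's rounding.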
The main reason to use it is to avoid common rounding errors, mostly in comparisons: the "2 != 1.9999999999998" kind of thing, where a comparison suddenly fails despite both values effectively being 2. It's easy to forget to use the proper operators or delta handling for comparisons, and edge cases always suck. It also makes output formatting easier.
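A minimal demonstration of that failure mode, using Python's `decimal` module as a stand-in for whatever Decimal type is in play here: the binary-float sum drifts, while the decimal sum compares exactly.

```python
from decimal import Decimal

# Binary floats accumulate representation error:
total = 0.1 + 0.1 + 0.1
print(total == 0.3)   # False: total is actually 0.30000000000000004

# The same sum as Decimal compares exactly:
dtotal = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(dtotal == Decimal("0.3"))  # True
```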
That being said, it's still fine to use double if it matches the use case. I'd avoid float though. But as long as it works and the drawbacks are manageable it's fine. It's more for future projects, use the right datatype and only optimize when necessary.
AccountNo23_III unless there are noticeable performance issues, it's best to use the right datatype for the job. My comment wasn't so much about the data type itself as about the tone while providing inaccurate feedback. But I appreciate the civil tone of the last response, so no hard feelings. Using an integer (the long int) is not wrong, but when dealing with monetary values, Decimal should be the way to go unless there is a showstopper for it.
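For comparison, here's a quick sketch of both approaches mentioned above (integer-of-smallest-unit vs. a decimal type), again using Python's `decimal` module purely as an illustration. Both avoid binary-float drift; the integer version just pushes the scale bookkeeping onto you:

```python
from decimal import Decimal

# Integer approach: store money in the smallest unit (cents).
price_cents = 1999          # represents 19.99
total_cents = price_cents * 3
print(total_cents)          # 5997, i.e. 59.97 once you reapply the scale

# Decimal approach: the value carries its own scale.
price = Decimal("19.99")
total = price * 3
print(total)                # 59.97
```

The integer version is fast and exact, but every piece of code touching it has to remember the implicit scale factor, which is exactly the kind of bookkeeping a Decimal type does for you.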