I am not ‘on the spectrum’; I just expect consistency!

My first calculator was a Casio FX81. For someone in their teens in the late 80’s in NZ, it was the calculator “to have”. With hindsight it proved robust and well suited to the tasks required for 6th and 7th form maths and physics, although that wasn’t the consideration at the time. A stock photo of the FX81 is shown below. Mine was black.

To perform calculations of the type shown, my recollection is that you would have to review the fraction expression and, in order, perform the addition, store the result in memory, do something else, retrieve it, and then some more. I think the keystrokes would be 3 + 2 = “M In” 4 / MR = * 5 =. I may have got it wrong, but it was something like that anyway. Looking at the stock photo, I can see some keys with brackets labelled above them, so perhaps you could perform the addition in isolation with brackets and preserve BODMAS execution order that way. Again, I just cannot remember. As an 18-year-old know-it-all (I did allude to being a teenager above!), I had heard of RPN, perhaps even been exposed to it at high school, and perhaps heard of RPN calculators too. More memory loss; again, I cannot remember. If I had felt inclined to perform some trigonometry on the FX81, say to calculate the sine of 90°, the keystrokes would be 9 0 Sin.

In 1982 I first attended the University of Otago, NZ. At the time, the then Computing Services Centre had some sort of mass student discount/purchase scheme for various things, including Hewlett Packard calculators. The calculator that piqued my interest was the HP41CX. I bought one, and my recollection is that it carried a hefty price too … a few hundred NZ$. My HP41CX has survived three degrees, some hard times in chemistry labs, and travel to another continent, yet it remains fully functional around 40 years later and is regularly exercised. I even have an HP41CX emulator on my iPhone and MacBook. A photograph of my HP41CX is included inline below.

Click on the image to see a screen recording of the keystrokes required to evaluate the result of the fraction/expression above. The keystrokes are 5 “enter” 4 “enter” 3 “enter” 2 “enter” + / *. As with the FX81, to calculate the sine of 90, the keystrokes are also 9 0 Sin.
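For the curious, the stack behaviour behind those HP41CX keystrokes can be sketched in a few lines of Python. This is purely my own illustration (the function name and simplified unbounded stack are mine; a real HP41CX uses a fixed four-level stack with its own register semantics):

```python
def eval_rpn(tokens):
    """Evaluate a sequence of RPN tokens with a simple stack,
    mimicking how an RPN calculator processes keystrokes."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for token in tokens:
        if token in ops:
            b = stack.pop()                 # the operator consumes the top two entries...
            a = stack.pop()
            stack.append(ops[token](a, b))  # ...and pushes the result back
        else:
            stack.append(float(token))      # a number is simply pushed ("enter")
    return stack.pop()

# 5 enter 4 enter 3 enter 2 enter + / *  evaluates 5 * (4 / (3 + 2))
print(eval_rpn(["5", "4", "3", "2", "+", "/", "*"]))  # 4.0
```

Note that, just as on the calculator, the operators arrive after their operands, and no brackets or precedence rules are needed.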

Odd, perhaps, that the calculation of the sine of 90 uses the same keystrokes on both the HP41CX and the FX81, whereas everything else is so different, given the HP41CX is an RPN calculator and the FX81 is not?

Odd too that the casual reader of this blog would think RPN calculations are so hard to comprehend, yet that same casual reader accepts that clicking the Sin key after typing 9 0 is fine, while in the calculation of 3 + 2, for example, 3 “enter” 2 + is just silly. Well, to answer my own question, this depends on your mindset and way of thinking. I would argue that the RPN approach, 3 “enter” 2 +, and 9 0 Sin, are consistent: the operators, + and Sin, are the terminal keystrokes.

I love consistency. To reword that sentence, I loathe inconsistency.

As a software developer, I see inconsistency everywhere: from the use of different coding styles in the same source file, to the mixing and (mis)matching of software design patterns, to API methods that vary in the order of their arguments for no apparent reason, other than that the developer introduced an inconsistency because they are sloppy or followed the bad coding style or pattern of a prior developer. Most of the time, however, I suspect these developers simply do not know any better.

The inconsistency I encounter when using the FX81 (and similar) calculators is so in-your-face it infuriates me: placing the mathematical operator last sometimes (e.g. the keystrokes 9 0 Sin to calculate the sine of 90° on both the HP41CX and the FX81), but not others (e.g. 3 + 2 = on the FX81, rather than 3 “enter” 2 + as on the HP41CX). RPN as implemented by Hewlett Packard in the HP41CX is consistent.

Sometimes inconsistency exists and I do not recognise it, at least not at first, but once I do, or it has been pointed out to me, it infuriates me. When the inconsistency is the argument order of a public method in C or C# for example, you just have to sigh, shake your head, or stomp your feet. Again, the inconsistency is so obvious, so annoying. It is often a bug in waiting too, especially when the arguments are of the same data type.
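As a contrived illustration of that bug in waiting (the function and parameter names here are hypothetical, not from any real API), consider two parameters of the same type whose order no compiler can police:

```python
def transfer(source_account: str, destination_account: str, amount: float) -> str:
    """Move `amount` from source_account to destination_account.
    Both account parameters are plain strings, so nothing stops a
    caller from swapping them: the code still runs, and the money
    simply goes the wrong way."""
    return f"moved {amount} from {source_account} to {destination_account}"

# Correct call:
print(transfer("savings", "current", 100.0))

# Arguments swapped by accident; the types still match, so the bug ships:
print(transfer("current", "savings", 100.0))

# Keyword arguments at the call site are one defence against the ambiguity:
print(transfer(source_account="savings", destination_account="current", amount=100.0))
```

The swapped call is indistinguishable from the correct one as far as the type system is concerned, which is exactly why consistent argument ordering across an API matters.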

Seasoned developers who write unit tests for their code (don’t get me started on the cowboys masquerading as professionals … the type who write code without unit tests and justify ad nauseam why they do not write them … honestly, that argument doesn’t stand up EVER; if you cannot be bothered testing your code, don’t bother writing it!) will be familiar with various assertion methods that put the EXPECTED result before the CALCULATED/ACTUAL result.

namespace uk.co.strychnine.tests
{
    [TestFixture]
    public class CalculatorTests
    {
        private Calculator calculator;

        [SetUp]
        public void Setup() 
        { 
            calculator = new Calculator(); 
        }

        [TearDown]
        public void TearDown()
        {
            calculator = null;
        }

        [Test]
        public void TestAddition()
        {
            // Arrange

            // Act
            var result = calculator.Add(3, 2);

            // Assert
            Assert.Multiple(() =>
            {
                Assert.AreEqual(5, result.Sum);
                Assert.IsFalse(result.DidError);
            });
        }
    }
}

The C# snippet above shows a typical unit test of this type (using NUnit, though the convention is also adopted by most other mainstream testing frameworks) that might be coded up for the addition functionality, the denominator in the expression above. Note the expected result of the sum of 3 and 2 is 5, and it appears first in Assert.AreEqual. A second assertion is also shown, checking for errors in the calculation: perhaps a denominator of zero, overflow, NaN, and more.

I have used testing frameworks other than NUnit. They’re all much of a muchness to be honest; MSTest test classes are annotated with the TestClass attribute whereas NUnit uses TestFixture, but the functionality for testing equality, throwing exceptions, collection support and the like, and the various other test attributes, are all very similar. Who knows why the expected result always comes first, but thank goodness there is consistency between MSTest, NUnit and the others. Having the expected result come first and the actual result second makes the developer’s life easy; inconsistencies and idiosyncrasies between testing frameworks on a matter of triviality would just be annoying, after all, so consistency is to be encouraged.

I show below a contrived like-for-like unit test, using Apple’s XCTest and Swift, for a similar calculator class demonstrating addition.

import XCTest
@testable import Calculator

class CalculatorTests: XCTestCase {

  var calculator: Calculator!

  override func setUp() {
    calculator = Calculator()
  }

  override func tearDown() {
    calculator = nil
  }

  func testAdd() {
    let (sum, didError) = calculator.add(3, 2)
    XCTAssertEqual(sum, 5)
    XCTAssertFalse(didError)
  }
}

Given this whole blog post is about consistency, is there an obvious inconsistency between Apple’s XCTest and almost every other mainstream testing framework? Look at the line Assert.AreEqual(5, result.Sum) in NUnit, whereas XCTest uses XCTAssertEqual(sum, 5); NUnit and all the others put the EXPECTED test case value first and the RESULT/ACTUAL second, whereas XCTest puts the RESULT/ACTUAL value first. Why introduce this inconsistency? Why would Apple do this? They do, after all, think about everything thoroughly before doing something, and for their APIs they do not seem concerned with backward compatibility (if Apple want to change an API interface to fix something or address an inconsistency, they do, as any Swift or Objective-C developer will tell you).

Another question is why the vast bulk of testing frameworks place the EXPECTED answer first (this thread on StackOverflow offers some insights). I suspect one answer is that, upon adoption of the early testing frameworks, later frameworks simply followed suit for the sake of convention and consistency, and thought no further.

Upon looking at the two NUnit assertions again, specifically these few lines

 Assert.Multiple(() => 
  { 
    Assert.AreEqual(5, result.Sum); 
    Assert.IsFalse(result.DidError); 
  });

it is clear that these two assertions/tests are inconsistent. In the first assertion, the EXPECTED result is 5 and the ACTUAL result is result.Sum; in the second assertion, Assert.IsFalse, however, the ACTUAL result is the first (and only) argument, not the second. This is clearly an exception to the rule, and as anyone learning any language, programming or written, will tell you, exceptions to rules are difficult. The only visual clue that this assertion method, IsFalse(..), takes the result rather than an expected value is the method name. I view this in a similar vein to the calculator exception to the rule, where the keystroke is placed AFTER the value when calculating the sine of a number (e.g. the key clicks 9, 0, then Sin, so the Sine operator is last, on both the FX81 and the HP41CX), whereas the operator + is last only on the HP41CX RPN calculator, not the FX81.

Compare this unit test snippet to the XCTest implementation.

  XCTAssertEqual(sum, 5) 
  XCTAssertFalse(didError)

Here, the first argument passed to the assertion method is always the ACTUAL result, never the expected result, irrespective of the method name. This is internally consistent, irrespective of the test operation. In these examples, THE XCTEST ASSERTION APPLIES TO THE FIRST ARGUMENT IN THE TEST METHOD, ALWAYS. Now isn’t this just nice? I wonder whether the Apple software engineers and architects appreciated the inconsistency in existing test frameworks when designing and implementing XCTest, and whether it contributed to them deviating from convention. I suspect they did. Inconsistency, especially internal inconsistency, is avoidable; all you have to do is think about things a tiny bit to correct the problem. Without thinking, other organisations and people just follow the herd, reproducing the same problem over and over.
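The internal consistency XCTest achieves is easy to reproduce in any language once you commit to the rule. Here is a hypothetical sketch (my own function names, not any real framework’s API) where, whatever the assertion, the ACTUAL value is always the first argument:

```python
def assert_equal(actual, expected):
    """Actual first, always, mirroring the XCTest convention."""
    if actual != expected:
        raise AssertionError(f"expected {expected!r}, got {actual!r}")

def assert_false(actual):
    """The (only) argument is again the actual value under test."""
    if actual:
        raise AssertionError(f"expected False, got {actual!r}")

# The two assertions from the calculator test, with a consistent rule:
result_sum, did_error = 3 + 2, False
assert_equal(result_sum, 5)   # actual first
assert_false(did_error)       # actual is the first (and only) argument
```

With this convention there is no exception to the rule to memorise: the method name tells you the comparison, and the first argument is always the thing being tested.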

Lastly, how many of you had not noticed that when you use a conventional calculator, you place the operator sometimes between the values and sometimes after them (i.e. + versus Sin), inconsistently? If you hadn’t noticed, I suspect you are not on the spectrum! And if you hadn’t noticed, and now that it has been drawn to your attention it aggravates you, join the club.

— Published by Mike, 12:07:29 31 Dec 2020 (GMT)
