[fix dev diary] Week 2: Foundation of the CLI App

In my dev diary blog post series, I document the minutiae of what I am doing for my toy project Fix. The diary can also be found in smaller bites on the GitHub pages of the project. Many of the subheadings link to the commits in question.

August 1st

The first TDD session, and seeing how far we have come in passing the first BDD scenarios. Fix shows usage information, no more “hello world!”

add cli application class

A few things have happened in this commit: First, I added the test/fix_cli directory and app_test.cpp inside it. This already reflects a few decisions: I usually want the tests and the actual source to be structured similarly, so there will be a directory src/fix_cli.

It will contain a library with the same name that implements the classes concerned with the CLI for Fix. The reason is that classes and code that are in a library can be tested by linking that library into the test executable. Code that is compiled directly into the actual executable is harder to test, if it can be tested at all.

The fix_cli library will contain only the CLI-related classes. Any class that deals with the actual domain of Fix goes into another library. The CLI classes do not belong in the same library as they are only one of several possible means to access the domain.

After setting up the CMake structure for the app test, I wrote the first test case. You’ll notice that it largely corresponds to the first two scenarios in the feature file I added in the last commit, except that it also checks that the -h option prints the usage message. Small deviations from the feature definition like this are OK as long as they do not complicate the code too much and there’s a good reason (users will probably expect both --help and -h to work, as they do everywhere else).

The test wouldn’t compile even after including all needed standard library headers because fix::cli::app did not exist. So, I added an empty class and went through the routine of setting up the CMake structure for the fix_cli library and added it as a dependency to the fix_cli_tests executable.

Compilation failed because there was no known conversion from std::stringstream to app. So I added the constructor. I could have been overly dogmatic about the TDD approach and used std::stringstream as the parameter type because that’s what the test uses. However, I know I want the class to work with std::cout, and the std::stringstream is only a tool to capture the output, so I used std::ostream for the constructor parameter instead.
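To give a sense of the shape (this is a sketch, not the code from the commit; the include path and the exact usage check are assumptions on my part), the first test case might look roughly like this:

```cpp
#include <catch2/catch.hpp>

#include "fix_cli/app.h" // hypothetical include path

#include <sstream>
#include <string>

TEST_CASE("the CLI app prints usage information") {
  std::stringstream output;   // captures what the app writes
  fix::cli::app app{output};

  SECTION("when called without arguments") { app.run({}); }
  SECTION("when called with --help") { app.run({"--help"}); }
  SECTION("when called with -h") { app.run({"-h"}); }

  // Placeholder check: the real test compares against the actual usage text.
  CHECK(output.str().find("usage") != std::string::npos);
}
```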

And yes, there are other ways to achieve this without injecting the stream into the class, e.g. swapping the stream buffer like I did here, but the outcome is the same.

Back to making the test green – after adding a constructor (that does nothing at all with its parameter), compilation fails due to the unknown run method. I added one, taking a std::vector<std::string_view> (because I knew that’s what I’d need in the end) and doing nothing with it. That’s the spirit of TDD: do the minimal amount of work to make things compile, i.e. a bunch of empty constructors and functions. The function has a // NOLINT comment for now, because clang-tidy complains about the unnamed argument (it would complain about an unused one if I named it) and about the fact that run could be made static.

Now the test compiles, but it fails miserably (who would’ve thought) because there’s no output. The minimum effort to pass it is to print the usage message into the stream right in the constructor. It looks a bit wrong: we’d expect to have a reference to the stream as a member and do the output properly in the run function, maybe after checking the options. But that’s not needed to make the tests pass, so I won’t do it just yet.
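For reference, the minimal class that satisfies the test at this point could look something like the sketch below. The usage text is a placeholder, but the structure (output right in the constructor, an empty run with a // NOLINT) follows the description above:

```cpp
// app.h, a sketch of the bare-bones version, not the actual commit
#pragma once

#include <ostream>
#include <string_view>
#include <vector>

namespace fix::cli {

class app {
public:
  explicit app(std::ostream& output) {
    // Minimum effort to make the test green: print the usage text right here.
    output << "usage: fix <command> [<args>...]\n";
  }

  // clang-tidy would complain about the unnamed parameter and about run
  // being a candidate for static, hence the NOLINT.
  void run(std::vector<std::string_view> const&) {} // NOLINT
};

} // namespace fix::cli
```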

integrate the new CLI app in the Fix executable

Now that we have the app class that outputs usage information, it’s time to see how far we are with the BDD tests. The central part of the commit is the integration of the app into the main function. That function is as good as done now, except for a small change that we’ll see shortly.

With the change to the main function, the fmt library is no longer needed. It may come back later, but if it does, I can always look back in the git history to see how the integration was done, should I forget. It is good style to clean up things that are no longer needed.

The other changes you’ll see in this commit concern behave: I removed an obsolete step definition and fixed some errors in the python code itself. Those errors had shown up earlier already, and I should have fixed them right when I wrote them, but I expected the scenarios to fail and therefore did not look closely at why they failed.

The last piece of this commit is the tags in the feature file and the script to run the behave tests: I’ll mark any feature or scenario that is not fully implemented with @skip and mark those I am currently working on as @wip (work in progress). Those that are not skipped are implemented and should not fail unless I change something in the functionality they describe. The script runs all tests that are not skipped and all that are work in progress.

The current status of the BDD tests is that “Show usage when called with help option” passes, while “Show usage when called without arguments” fails in the last line – the program always terminates with exit code 0. That will be the next step to fix.

August 2nd

Finalizing the first two BDD scenarios, Fix now returns a proper exit code.

Return exit code from app::run and main

Yesterday’s remaining task was to get the correct exit code from the Fix executable to pass the current WIP scenarios. This is best achieved by having the app::run method return an exit code and forward that as exit code from main.

It starts with the unit test again: app_test now requires that app::run returns the correct exit codes. To fix the compiler error, run first has to return something comparable to an exit code, and int is the obvious choice as that’s what main returns.

To fix the failing unit tests, we have to check the arguments passed to run – as it happens, we can just check whether there actually is an argument to pass the tests. This looks too simple, even wrong – but the tests are green, so it is sufficient. More tests are coming up in our feature file that should take care of this problem.
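A sketch of what that minimal check might look like; the concrete return values are my assumption, derived from the scenarios (calling Fix without arguments should not exit with 0):

```cpp
// Sketch: run now returns an exit code. An empty argument list counts as a
// usage error, anything else passes for now, just enough to make the tests green.
int app::run(std::vector<std::string_view> const& args) {
  return args.empty() ? 1 : 0;
}
```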

To my surprise, returning the exit code from main did not fix the failing scenario: Behave still told me that the exit code was not the one I expected – but it did not tell me what either value was.

This was the reason for the assert_equals function you see in the step definition file. There are several python testing frameworks out there that provide this kind of functionality, but only one I found offers it as a free function. All of them contain way more functionality than I need, so I opted to write this function by hand for now.

The culprit turned out to be that while the unit tests pass only the arguments to the CLI app, the main function passed the name of the program itself as well. After fixing that, all WIP scenarios are green.
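Putting both pieces together, main might now look roughly like this (a sketch; the include path is an assumption):

```cpp
#include "fix_cli/app.h" // hypothetical include path

#include <iostream>
#include <string_view>
#include <vector>

int main(int argc, char* argv[]) {
  // Skip argv[0]: the unit tests pass only the arguments, not the program name.
  std::vector<std::string_view> const args(argv + 1, argv + argc);
  fix::cli::app app{std::cout};
  return app.run(args);
}
```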

Refactor: move assert_equals into own python module

Besides a few whitespace fixes and the removal of the @wip and @skip tags from the now-passing scenarios, the commit message already says it all: The assert_equals function does not belong to the CLI step definitions alone; it will most likely be used all over the tests, and there are bound to be more similar functions. Moving it into its own module is a logical step. Let’s move on to the next BDD scenarios.

August 3rd

Before I merge the initial features of the CLI app back to the main branch, I want to improve the tooling a bit.

tooling: rework and apply clang-format configuration

The clang-format configuration I had come up with during the setup did not seem to work very well. CLion would not apply the formatting the way I thought I had configured it, and clang-format 12, which I am using in the docker4c container, threw errors while parsing the configuration.

The new configuration seems to do what I’d like, even though I fell victim to a bug in CLion’s documentation for the brace wrapping.

tooling: add unit tests and behave tests to CTest suite

I had a go at experimenting with CTest which I have not actively used so far. The result is a test suite that includes the unit tests as well as the Behave tests, the latter separated into stable and “work in progress”.

I’m quite pleased with how it turned out, as I now can launch all tests from CLion with one click and get to the location of test failures inside the feature file as well.
[Screenshot: behave and CTest integration in CLion]

August 5th

The creation of issues is hard to test without being able to list them, and listing issues can only be done if we can create some. Therefore, in this step, I’ll implement the basics of both features in lockstep.

behave: create and list issues

The five scenarios in this commit describe two new functionalities in Fix – namely the creation of an issue and listing all issues. They do not describe how the CLI for these features has to be implemented.

In fact, from now on the CLI app will serve primarily as a test driver so I can access the Fix domain library from the behave tests. Only when I encounter problems in the CLI app itself that need to be addressed will I add tests to drive their fix.

While the first scenario is one that you’d probably expect for any container or repository (“when it’s fresh, it’s empty”), the others are not too fine-grained. For example, you’ll note that there is no test that describes that after the creation of one issue there should be that one issue in the list.

Those details can be derived from the other scenarios and should emerge from a straightforward implementation. If they don’t, there will be unit tests for the details, but I want to avoid testing all of the requirements on both test levels.

The last two scenarios deal with the issue ID. I explain in the milestone description why I don’t want to use consecutive numbers. However, two issues with the same first words in the title that are otherwise different should have different IDs, while creating two issues with the exact same content should get rejected.

The expected error message “issue already exists” implies that the generated hash should always be the same for issues with identical content, i.e. simply generating 28 random bits won’t do.
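Just to illustrate the determinism argument (this is not the real ID scheme; std::hash and the concatenation are stand-ins I made up):

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Illustration only: identical title and description always yield the same ID,
// so creating the exact same issue twice can be detected and rejected.
std::uint32_t issue_id(std::string const& title, std::string const& description) {
  auto const hash = std::hash<std::string>{}(title + '\n' + description);
  return static_cast<std::uint32_t>(hash & 0xFFFFFFF); // keep the lowest 28 bits
}
```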

August 6th

Today was about the first functional scenario: getting a list of 0 issues.

behave: implement steps for empty list scenario

The “list issues for an empty repository” scenario was still failing due to steps not being implemented. Behave is somewhat helpful in that regard as it emits messages like this one:

You can implement step definitions for undefined steps with these snippets:

@given(u'an empty issue repository')
def step_impl(context):
    raise NotImplementedError(u'STEP: Given an empty issue repository')

I took these hints and implemented the three missing steps. One was the “terminates with exit code OK” step, which clashed with the “terminates with exit code 0” step I had written before. Currently, both steps coexist thanks to pattern matching, but I like the “OK” one more as it describes the semantics, not the technical detail.

The “When we list all issues” step implementation is straightforward. It calls “fix list” and thereby defines the command line argument that the app will have to handle.

The “Given an empty issue repository” step might be a bit surprising: it does exactly nothing. The question might be “then why do unnecessary work and write the sentence in the feature file and the step definition?”

The answer is that “When we list all issues it prints 0 issues” is only true for an empty repository; it’s not the general case. There surely will be scenarios that start with a non-empty repository, and those will have a matching step.

Another answer is one that does not really count: we might have to actually implement this step in the future if Fix can only work with an existing repository. It does not count because that’s in the future – we should never write code because of “might”s and “maybe”s.

That’s the “YAGNI” principle: You Ain’t Gonna Need It. It does not mean we should not keep in mind likely code changes – we should write code that is extensible, but we should not extend it right now unless we need the extension now.

feature: list 0 issues

Now that the steps are defined, the app has to implement the feature. The current failure was the app complaining that “‘list’ is not a fix command”.

The implementation is short, albeit disappointing: First, write a test that demands the exact output we defined in the scenario. Then, write the fewest lines of code that make the test pass – nothing fancy.
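Roughly speaking, “nothing fancy” means something like the sketch below. I am assuming here that the output stream is by now stored in a member (called output_ in the sketch) and that the scenario expects the text “0 issues”; neither detail is taken from the commit itself:

```cpp
// Sketch of the minimal command handling at this stage.
int app::run(std::vector<std::string_view> const& args) {
  if (args.empty()) {
    return 1;
  }
  if (args[0] == "list") {
    output_ << "0 issues\n";
    return 0;
  }
  output_ << '\'' << args[0] << "' is not a fix command\n";
  return 1;
}
```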

What about refactoring? After all, that’s the next step after making the test pass, right? I’m coming to that in a second. It’s getting late, just let me get sidetracked with a little tooling I wanted to have all week.

tooling: add script for dev-diary posts

After writing the dev diary for almost two weeks now, it got a bit tedious to list the commits for each day, copy together the links to the individual commits, and so on. As you can see from the diff, the individual posts also had slightly different formats for the dates, so I wanted to automate as much as possible.

The result is that at the end of each day I run the script and it creates a new markdown file for the dev diary post. All I have to do is write the prose and find a title – the dates are formatted as I want them, the links are there, etc.

During the week I may do some work on the project in the morning, not knowing whether I find time to do more in the evening. That’s OK, I can just run the script in the morning and again in the evening, it will append to an existing file. All I have to do is remove duplicate links.

Call me lazy – in cases like this I am, and it’s a good thing. I even went one step further right now and automated the commit message for the dev diary posts with git aliases:

[alias]
    commit-dev-diary = "!f() { git add docs/_posts/$1* && git commit -m \"dev-diary: add entries for $1\"; }; f"
    ddyesterday = !"git commit-dev-diary $(date -I --date=yesterday)"
    ddtoday = !"git commit-dev-diary $(date -I)"

This kind of little improvement is what makes work actually fun for me. Not only did I reduce the time I have to spend on the technical aspects of the dev diary, I also learned a thing or two about bash scripting (in this instance, the date command and heredocs). Being a C++ specialist is interesting, but it also takes a good measure of generalist knowledge to get the work done effectively.

Upcoming refactorings

There are a handful of things I left to be refactored due to my short tooling detour:

  • cli::app unit tests. There’s a pattern emerging in how the app class is tested: create a std::stringstream to capture the output, assemble a vector of arguments, run the app, check the output and return code. The stringstream handling and the vector are mostly boilerplate. The most expressive way to write these unit tests would be passing a simple string with space-separated arguments to a function and getting something back that holds the output string and exit code.
  • Argument processing in cli::app. Working with the argument vector and handling the different inputs and commands has become unwieldy even before there is any actual functionality to speak of. There are several command-line argument processing libraries that may help. I have set my eyes on docopt since it looks as if it’s going to help a lot with printing the usage strings etc.
  • The behave step checking for numeric exit codes is redundant. As discussed earlier, I like the semantic exit codes more, so I’ll get rid of the numeric ones.

August 7th

Today’s session was spent with the refactorings that have accumulated over the last two days.

refactor behave: use semantic exit codes

This one was rather a warm-up:
– inline the call to check_exit_code_num in check_exit_code_alpha
– change check_exit_code_num to always fail
– find all failing scenarios and replace 0 with OK, 1 with ERROR
– add ERROR to the translations map in check_exit_code_alpha

Run tests after every one of these steps – they’re fast enough. Yes, after the first step, the whole thing could have been solved by search & replace. However, I’m practicing techniques here, and especially in larger codebases, this way is more controlled than letting S&R loose on everything.

refactor tests: add split function for CLI app arguments

The key element of this refactoring was getting familiar with ranges. And finding out that clang 12 does not yet support them, hence the use of range-v3.

The split function implementation in general holds little surprises:
– split the parameter string_view into a range of subranges
– convert the subranges to string_views
– convert the range of string_views to a vector of string_views

A surprise to me was the slightly ugly line necessary for the conversion from split view subranges to string_view. The canonical way would be to create the string_view from the begin and end iterators. However, in the libstdc++ 10 library I am using, string_view‘s iterator pair constructor checks a few concepts on the iterators to make sure they point to contiguous memory. That information gets lost during the split, and the split_view::inner_iterator that is returned by begin and end does not satisfy the constraint.
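The result might look something like this sketch (using range-v3; the pointer-and-length construction is the workaround for the lost contiguity information described above, not necessarily the exact line from the commit):

```cpp
#include <range/v3/all.hpp>

#include <cstddef>
#include <string_view>
#include <vector>

// Split a space-separated command line into individual string_views.
std::vector<std::string_view> split(std::string_view input) {
  namespace views = ranges::views;
  return input | views::split(' ')
               | views::transform([](auto&& token) {
                   // Rebuild the string_view from a pointer and a length, since
                   // the iterator pair constructor rejects the split view's
                   // non-contiguous iterators.
                   return std::string_view{
                       &*token.begin(),
                       static_cast<std::size_t>(ranges::distance(token))};
                 })
               | ranges::to<std::vector>();
}
```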

With the split function in place, I used it to replace the manually constructed vectors of multiple arguments.

refactor tests: move app execution boilerplate into separate function

With the split function it was now possible to further simplify the test cases. The CLI app’s only responsibility for now is to take the command line input and produce output and an exit code. To not clutter the test cases with technical details, I wanted a way to convert easily readable input into those two outputs in as little code as possible.

With the run_result struct and the run_app function that seems to be achieved. For now, at least: The app will have dependencies in the future, and factoring them into the tests will change how the scaffolding for the test cases has to look.
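Assuming the names from the commit (run_result and run_app) and the split function from before, the helper probably looks roughly like this:

```cpp
#include <sstream>
#include <string>
#include <string_view>

// Everything a test case needs to know about one run of the CLI app.
struct run_result {
  std::string output;
  int exit_code{};
};

// Run the app on a space-separated command line and collect its results,
// keeping the stringstream plumbing out of the individual test cases.
run_result run_app(std::string_view command_line) {
  std::stringstream output;
  fix::cli::app app{output};
  auto const exit_code = app.run(split(command_line));
  return {output.str(), exit_code};
}
```

A test case then boils down to something along the lines of CHECK(run_app("list").exit_code == 0).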

One last detail in this commit is the new .clang-tidy configuration for the CLI tests. After I had started refactoring the test cases to look more readable, clang-tidy began to complain about their complexity. Talk about irony. The error messages hinted at the Catch2 macros. It seems they expand into relatively deeply nested conditionals, which triggered the warning.

Since I do not plan to write complicated code in the unit tests, I opted to disable that specific warning completely for them.

refactor: introduce docopt

Even though it does not look like much, this one cost me more time than the other refactorings combined. It took multiple steps to get to this commit:

First, docopt had to be installed. Thanks to Conan, the installation was one line plus linking it into the CLI app. Then the integration into the run method had to start.

It started with only parsing the arguments and doing nothing with the results. docopt_parse throws up to four different exceptions, so I wrote a try with the necessary catch blocks:

try {
  auto const& parsed_args = docopt::docopt_parse(std::string(USAGE), argv, true, false, true);
  (void)parsed_args;
} catch (docopt::DocoptExitHelp const&) {
} catch (docopt::DocoptExitVersion const&) {
} catch (docopt::DocoptLanguageError const& error) {
} catch (docopt::DocoptArgumentError& argError) {
}

An ArgumentError is thrown whenever the arguments and options do not fit the given usage pattern. This was the case for the “no arguments” test case, so I moved the treatment for args.empty() into the last catch block.

There is one other test case that triggered the exception. The usage pattern, according to the syntax docopt uses, allowed only one argument for the command. The test case passing fruits (apple banana cherries) has multiple arguments, so the usage pattern had to be changed to [<args>...] everywhere.

An ExitHelp exception is thrown when docopt encounters the --help or -h option, and only because the first boolean argument is true – otherwise it would throw an ArgumentError in my case: According to the usage pattern, --help alone is not allowed, as the <command> is not optional.

So I moved the treatment for the --help option into that catch block. Now there were two empty catch blocks left: ExitVersion would never be thrown because the second boolean parameter of docopt_parse is false. Since a --version option is not a use case right now, I removed the catch block.

I removed the LanguageError catch block as well: That exception is thrown when the usage string cannot be parsed. I don’t want to catch an exception that signals a broken program, because there is nothing sensible to do in that case. I want the exception to be thrown and all tests to fail, should I mess up the usage pattern.

After the exceptions were handled, what remained were the “list” command and the “unknown command” test cases. Those were still outside of the try block and queried args[0], so I moved them into the block and refactored them to use parsed_args.

According to the usage pattern, there always is a <command>, and since it is free text and a single command, it is a string. Therefore there is no exception handling for the line auto const& command = parsed_args.at("<command>").asString(); even though both at() and asString() theoretically can throw exceptions. I can’t imagine how they would be thrown, or else I would have written test cases for that situation and exception handling to pass the tests. If they do throw one day, I want to know: let it crash so I can fix it.
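Put together, the reworked run might end up looking roughly like the sketch below. The usage text, the member name output_, and the exact messages are placeholders; only the overall structure follows the steps described above:

```cpp
#include <docopt/docopt.h>

#include <string>
#include <string_view>
#include <vector>

namespace {
// Placeholder usage pattern, the real one lives in app.cpp.
constexpr std::string_view USAGE = R"(usage: fix <command> [<args>...]

Available commands:
  list    list all issues
)";
} // namespace

int fix::cli::app::run(std::vector<std::string_view> const& args) {
  // Convert to strings right away, since docopt works with std::string.
  std::vector<std::string> argv;
  argv.reserve(args.size());
  for (auto const arg : args) {
    argv.emplace_back(arg);
  }

  try {
    auto const parsed_args =
        docopt::docopt_parse(std::string(USAGE), argv, true, false, true);
    // <command> is mandatory and free text, so at() and asString() are
    // expected not to throw here.
    auto const& command = parsed_args.at("<command>").asString();
    if (command == "list") {
      output_ << "0 issues\n";
      return 0;
    }
    output_ << '\'' << command << "' is not a fix command\n" << USAGE;
    return 1;
  } catch (docopt::DocoptExitHelp const&) {
    // --help / -h: print the usage text and exit with OK.
    output_ << USAGE;
    return 0;
  } catch (docopt::DocoptArgumentError const&) {
    // No arguments, or arguments that do not match the usage pattern.
    output_ << USAGE;
    return 1;
  }
}
```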

There are a few minor things I observed during this refactoring that nag me a bit:
– app::run now immediately takes the vector of string_view it gets and converts it into a vector of string for docopt. There is no more reason for the function to require string_views, i.e. the signature can be changed. (Docopt is unlikely to change its signature in the near future due to backward compatibility with C++11.)
– Whenever I changed the usage pattern in app.cpp I had to change it for the unit test, and sometimes for the behave tests as well. It is questionable whether the tests should be that restrictive. I’ll observe this during the next days and weeks – if I have to touch it more often and change the test expectations every time, I will probably loosen them, e.g. just look for “usage: ” and “Available commands”.


That’s it for this week. Did you spot mistakes I made? Would you have done things differently? Also, this dev diary format is completely new to me, is it something you find interesting? I’d love to know what you think, please leave your comments and questions below.
