[fix dev diary] Week 4: Closing in on core domain code

In my dev diary blog post series, I document the minutiae of what I am doing for my toy project Fix. The diary can also be found in smaller bites on the GitHub pages of the project. Many of the subheadings link to the commits in question.

August 17th

After today, the “create issue” scenario is working, and the CLI app no longer contains functionality that does not belong there.

Parse create command options

The main part of this commit is in the CLI app: until now, the app passed a title to application_service::create that happened to produce the correct result for the single unit test case. A second test case required actually parsing the --title and --descr (or -t and -d) options that had been passed in the unit and behave tests all along.

To achieve that, I used docopt again, with a dedicated usage string for the “create” command. I see a pattern emerging in the future: most commands will have a usage string, the argument vector will be parsed by docopt, and the extracted arguments or options will be passed to a method of the application service.
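To make the pattern concrete, here is a minimal sketch of what such a command-specific usage string and the docopt parsing could look like, assuming the docopt.cpp library. Only the --title/-t and --descr/-d option names come from the post; the usage text, the optional description, and the standalone main are illustrative assumptions.

```cpp
#include <docopt/docopt.h>

#include <iostream>
#include <string>
#include <vector>

// Hypothetical usage string for the "create" command.
static char const create_usage[] = R"(Usage:
  fix create --title=<title> [--descr=<descr>]

Options:
  -t <title>, --title=<title>   Title of the new issue.
  -d <descr>, --descr=<descr>   Optional longer description.
)";

int main(int argc, char** argv) {
    // docopt matches the arguments against the usage string and returns a map
    // from option name to docopt::value.
    std::vector<std::string> const cli_args(argv + 1, argv + argc);
    auto const args = docopt::docopt(create_usage, cli_args);

    auto const title = args.at("--title").asString();
    auto const descr = args.at("--descr").isString() ? args.at("--descr").asString()
                                                     : std::string{};

    // In the real app these values would be handed to application_service::create;
    // here they are just echoed.
    std::cout << "create: " << title << " / " << descr << '\n';
}
```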

It’s too early to write a generic solution for that, though, since there is only one command with that pattern so far. The actual implementation of the list command should be moved to the application service, but since “list” does not have arguments or options yet, I’d be generalizing from only 50% of the cases.

Move list functionality to application service

Speaking of which. The implementation of the “list” command, albeit rudimentary, is the only piece of domain logic left in the CLI app. This commit solves that by adding the list method to the application service and then calling it from the app.

Of course, I started by writing the test case again. It went from compile error to green test in one keystroke, because CLion was helpful enough to generate the method with the correct return 0 statement. To go strictly by the book, I’d have had to change the return value or the expected value in the test to see it fail. After all, who can say for sure that a test is executed when you’ve never seen it fail? I’m pretty confident that it does get executed, though, so I skipped that part. A tiny bit of pragmatism never hurts.

With the test case in place, the next step was refactoring the CLI app to call application_service::list. While determining the number of issues belongs to the domain logic, the output format string does not: it is specific to the command-line interface, so it stays in the app.
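As a rough sketch of how that split could look (the class shape, method names, and output wording are assumptions, not the project’s actual code):

```cpp
#include <cstddef>
#include <iostream>

namespace fix {

// Hypothetical application service: answering "how many issues are there?"
// is application/domain logic and lives here.
class application_service {
public:
    std::size_t list() const {
        return 0;   // trivial for now, just like the generated implementation
    }
};

} // namespace fix

// Hypothetical CLI-side function: the format string is presentation
// and therefore stays in the app.
int list_command(fix::application_service const& service) {
    std::cout << "There are " << service.list() << " issues.\n";
    return 0;
}

int main() {
    fix::application_service const service;
    return list_command(service);
}
```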

Factor out methods for commands in CLI app

Parsing the arguments to the “create” command, creating and calling the application service, and processing the return values have put quite a bit of code into the run_command method in the CLI app. In addition, the method now does several things: the logic for the two commands and the logic to select which command to execute. So it’s time to refactor.

The refactoring is one of the easiest there is: move the body of each if block into its own function. An argument could be made for also separating the argument parsing from the logic of the create method. If you look at the code, you can already see the blank line that splits the method into two distinct blocks.

For now, I’ll leave the two together, for two reasons. The first is that the method is good enough as it is for now; there are more important things to work on. Pragmatism trumps dogmatism, at least in my book. The second reason is that, as mentioned above, I expect more commands to emerge with a similar pattern. I like to see the code that makes up the pattern in one place and to be able to extract it easily when I get around to refactoring it.
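As an illustration, the dispatch could end up looking roughly like this; the function names and the error handling are assumptions, not the project’s actual code:

```cpp
#include <string>
#include <vector>

namespace fix_cli {

// Hypothetical per-command functions: each holds what used to be the body of
// an if block in run_command.
int create_command([[maybe_unused]] std::vector<std::string> const& args) {
    // parse --title/--descr with docopt, call application_service::create,
    // and report the result (see the earlier sketch)
    return 0;
}

int list_command([[maybe_unused]] std::vector<std::string> const& args) {
    // call application_service::list and print the count
    return 0;
}

// run_command keeps only the logic that selects which command to execute.
int run_command(std::vector<std::string> const& args) {
    if (args.empty()) {
        return 1;
    }
    if (args.front() == "create") {
        return create_command(args);
    }
    if (args.front() == "list") {
        return list_command(args);
    }
    return 1;   // unknown command
}

} // namespace fix_cli

int main(int argc, char** argv) {
    return fix_cli::run_command(std::vector<std::string>(argv + 1, argv + argc));
}
```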

implement steps for “created issues are listed” scenario

With the code being good enough (for my taste), it’s time to move on to the next behave scenario. As usual, the first thing to do is to implement the missing step definitions to make it run and produce actual errors I can work on in the C++ implementation.

The first step is a variation of a step that already existed (creating an issue), so the common functionality is factored out into a separate function. The second step, printing a list of issues, is new. The table in the scenario is stored in the context that behave passes to the step definition.

The implementation of the step defines some of the behavior the CLI app has to implement: it first expects a header line with the column headings “ID”, “STATUS”, “TITLE”, separated by arbitrary non-word characters. That way the test won’t fail immediately when I decide to add some pretty-printing to the table, e.g. by separating the columns with pipe symbols. Then the contents of the table are expected in the same order as the column headings, including the [hash] placeholder, and again separated by non-word characters.

In hindsight, there are two things I could have done better in this commit: the regular expressions are a bit unwieldy and would benefit from a short explanatory comment, as most regular expressions do. The other thing is that the whole Python file delegates everything that has to do with I/O to the fix_cli module, except for the line that checks for the column headings. The functionality needed does not yet exist in that module, but that’s no reason to get sloppy.

I’ll sneak the regex comments in with the commit for this dev diary entry. The refactoring will be part of one of the next commits.

A slight change of process

Until now, I have been implementing things depth-first: implement the behave step definitions for one scenario, implement the CLI app functionality to make the steps pass, refactor, and go on to the next scenario. So far, that has meant relatively fast switching between only two layers (behave and the CLI app), but in the future, there will be more: I just introduced the application layer, the next scenario will require an infrastructure layer for the repository and storage, and the first domain classes will be created.

Continuing scenario by scenario would mean constantly switching between all those layers. Instead, I want to go breadth-first: write all the step definitions needed for the whole feature file, then all the needed functionality in the domain library, then the infrastructure layer, then the CLI app. In the end, tie everything together and see that the behave tests pass.

In other words, I want to implement what is needed for one feature one layer at a time, not one scenario at a time. Whether the implementations in the layers will occur in the exact sequence I wrote above is not that important and remains to be seen.

August 19th

A little clean-up work and the last step for the list/create feature file.

implement missing step for current feature

As discussed in the last dev diary entry, I implemented the step definitions needed to run the rest of the scenarios in the current feature file. As it turned out, there was only one step left to implement, and it was a combination of two existing steps.

I also fixed the issue that the step definitions bypassed the fix_cli module at one point, which I briefly discussed last time as well. As of now, all remaining scenarios in the feature file are tagged @wip.

If you are wondering about the change in the definition of behave_wip_tests: the -w option of behave executes the tests tagged @wip with reduced output, but it also stops at the first error. That was enough until now, but to see the missing step definitions, I wanted to execute all tests with that tag and see all the errors.

The new options are relatively close to what -w does, except that behave does not stop at the first error. The -k option omits the listing of scenarios that were skipped due to the tag selection: I would not want the output of the handful of WIP scenarios to be buried among listings of scenarios that are not interesting.

August 20th

Today I started to implement the first actual domain class.

outline for issue title tests

In the big picture, I am still working on creating issues. To create an issue, we need titles and descriptions, among other things. But wait, are those not just strings? They are not. “@{&%#\\n```what” is a string but not a valid title, nor is the empty string. The requirements say that a title has to be 6–120 characters long and is restricted to a certain character set.

In addition, a title is not the same as a description, so the two can’t be the same type, nor can they be the same type as any other “string-y” objects in the domain. Having separate types for separate concepts is not only good practice in Domain-Driven Design; strong types are good practice in general, because they avoid mixing up variables with different semantics.

Since the input arrives as normal strings, there needs to be a point where they are converted into a title object. All the constraints have to be checked there, so we cannot end up with an invalid title. The classic solution to this conundrum is to throw an exception from the constructor if the arguments do not meet the requirements. There are other options, though, especially for code where exception handling is not wanted or available.

One of those options is a static factory method that checks the arguments and then either creates the object or returns a failure. To ensure that objects are created only via that factory method, the constructors are made private. The “object or failure” can be modeled using pointers, std::optional, std::variant, and similar mechanisms. One of them is the proposed std::expected, which is what I am aiming for.
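A minimal sketch of that pattern, using std::optional as the “object or failure” type (the post itself starts with a plain bool and moves on from there); the class layout and names are assumptions:

```cpp
#include <optional>
#include <string>
#include <string_view>

namespace fix {

class title {
public:
    // The only way to obtain a title: validate first, construct only on success.
    static std::optional<title> create(std::string_view text) {
        if (!is_valid(text)) {
            return std::nullopt;              // failure: no invalid title is ever built
        }
        return title{std::string{text}};      // success
    }

private:
    static bool is_valid(std::string_view text) {
        // length rule from the requirements; the character set check is omitted here
        return text.size() >= 6 && text.size() <= 120;
    }

    // Private constructor: clients cannot bypass the checks in create.
    explicit title(std::string text) : text_{std::move(text)} {}

    std::string text_;
};

} // namespace fix
```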

But I am getting ahead of myself. This commit mainly introduces the test outline for the restrictions on the length and character set of the title class. For now, it is a bunch of empty Catch2 SECTIONs that describe the requirements they will check. If you’re wondering why they are not called “test create” or similar, as is so often the case, have a look at Kevlin Henney’s brilliant talk about Good Unit Tests.

To trigger the creation of the class and the factory method, a single test calls the method and expects it to fail. I could have returned the error-reporting type I want to use right from the beginning, but for now, I opted for bool, as that’s sufficient to pass the single test.
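For illustration, the outline could look roughly like this; the section wording and the tiny stand-in for the real header are assumptions, and create returns only a bool at this stage:

```cpp
#define CATCH_CONFIG_MAIN            // Catch2 v2 single-header style
#include <catch2/catch.hpp>

#include <string_view>

// Stand-in for the real fix::title header, just enough to compile the outline.
namespace fix {
struct title {
    static bool create(std::string_view) { return false; }
};
}

TEST_CASE("issue title") {
    // Sections named after the behavior they will check, not "test create".
    SECTION("a title shorter than the minimum length is rejected") {}
    SECTION("a title longer than the maximum length is rejected") {}
    SECTION("a title within the allowed length range is accepted") {}
    SECTION("a title with characters outside the allowed set is rejected") {}

    // The single check that drives the creation of the class and factory method:
    CHECK_FALSE(fix::title::create("@{&%#"));
}
```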

To keep the scope of the tests from inflating, I opted to restrict the title character set to ASCII for now. The product owner agrees.

implement length checks for title

Implementing the length checks is rather straightforward. I did not go for the dogmatic approach of writing one test at a time and making it pass. Instead, I first wrote all the checks in the test case and then implemented the rather simple check that makes them pass.

Let’s have a quick look at the checks and sections:
– The “too short” check is exactly one character shorter than the minimum length, and the first case in the “allowed range” check is exactly of minimum length.
– The “too long” check is exactly one character longer than the maximum length, and the second case in the “allowed range” check is exactly of maximum length.

It is considered good testing practice to have checks at the edges of ranges where behavior changes. After all, off-by-one errors are among the most common bugs in software. (In hindsight, instead of “short” I should have used a string that explicitly has length MIN_LENGTH-1, in case the minimum length changes one day.)

In principle, the “empty string” case is part of the “too short” case, but it gets an explicit mention because of string/string_view’s default constructor.
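Here is a sketch of what those boundary checks could look like, written the way the hindsight note above suggests, with the test strings derived from the length constants. The constant names and the bool-returning create are assumptions matching this stage of the post:

```cpp
#define CATCH_CONFIG_MAIN
#include <catch2/catch.hpp>

#include <cstddef>
#include <string>
#include <string_view>

namespace fix {
struct title {
    static constexpr std::size_t MIN_LENGTH = 6;
    static constexpr std::size_t MAX_LENGTH = 120;

    static bool create(std::string_view text) {
        return text.size() >= MIN_LENGTH && text.size() <= MAX_LENGTH;
    }
};
}

TEST_CASE("issue title length") {
    using fix::title;

    SECTION("one character below the minimum length is rejected") {
        CHECK_FALSE(title::create(std::string(title::MIN_LENGTH - 1, 'a')));
    }
    SECTION("the empty string is rejected") {
        CHECK_FALSE(title::create(""));
    }
    SECTION("minimum and maximum length are accepted") {
        CHECK(title::create(std::string(title::MIN_LENGTH, 'a')));
        CHECK(title::create(std::string(title::MAX_LENGTH, 'a')));
    }
    SECTION("one character above the maximum length is rejected") {
        CHECK_FALSE(title::create(std::string(title::MAX_LENGTH + 1, 'a')));
    }
}
```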

add non-exceptional error handling for title::create

To get away from the bool return type and towards one of the “object or failure” return types mentioned earlier, I wrote a test case that needed an actual title object to be returned. Since the tests do not have access to the title constructor, I needed to do something with the title, and the best candidate for that was the conversion back to a string, as we’ll need it anyway in the future.

The return type I use for now is std::optional. It is sufficient to either hold a value or signal a failure. Switching to std::expected later will not be a problem, as the part of the interface I have used so far is identical in both types. The motivation for std::expected will be to have distinct failures for the different actions that can fail and for the different ways those actions can fail.
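A sketch of that step: the optional-returning create, extended by a conversion back to a string (the member name to_string is an assumption), plus the test that forces an actual object to be returned:

```cpp
#define CATCH_CONFIG_MAIN
#include <catch2/catch.hpp>

#include <optional>
#include <string>
#include <string_view>

namespace fix {
class title {
public:
    static std::optional<title> create(std::string_view text) {
        if (text.size() < 6 || text.size() > 120) {
            return std::nullopt;
        }
        return title{std::string{text}};
    }

    // Conversion back to a string, the first thing the tests can observe.
    std::string const& to_string() const { return text_; }

private:
    explicit title(std::string text) : text_{std::move(text)} {}
    std::string text_;
};
}

// has_value() and operator-> exist on std::optional and std::expected alike,
// so this test should survive the later switch.
TEST_CASE("a created title converts back to the original string") {
    auto const maybe_title = fix::title::create("a valid issue title");
    REQUIRE(maybe_title.has_value());
    CHECK(maybe_title->to_string() == "a valid issue title");
}
```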

August 21st

On the way to std::expected<T, std::error_code>.

add domain_error enum with std::error_code support

Since I want to use std::expected, I had to decide what the error type should be. A plain enum would work, or even just integers or strings. Strings do not seem like a good choice to me, and integers would be either magic numbers or a bunch of predefined constants. That is not really different from an enum, except that enums are a bit safer in that they only allow predefined values.

Then again, if an error is not a domain error but e.g. a file system error or similar, I do not want to juggle std::expected instances with different error types. That is where std::error_code comes in: it provides the combination of an error enumerator and an error category, and the category includes the conversion of the enumerators into printable error messages.

I won’t go into much detail about what I did here because I mostly followed this blog post by Andrzej Krzemieński to implement the steps necessary to use std::error_code together with the domain_error enum. However, I let the implementation be driven by TDD again – mostly:

Creating the std::error_code from the enum already requires the header parts you see below the “wiring to enable…” comment. The implementation of the make_error_code function already pretty much requires all you see in the source file. The only thing I see in hindsight that is not test-driven is the implementation of the name method of the fix error category. I should fix that soon.
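For reference, a condensed sketch of the kind of wiring that approach ends up with, following the pattern from the linked post; the enumerator, the error messages, and the category name are assumptions, not the project’s actual values:

```cpp
#include <iostream>
#include <string>
#include <system_error>
#include <type_traits>

// --- header part: the enum and the wiring to enable std::error_code ---
namespace fix {

enum class domain_error {
    invalid_title = 1,   // 0 conventionally means "no error", so start at 1
};

std::error_code make_error_code(domain_error error);

} // namespace fix

namespace std {
template <>
struct is_error_code_enum<fix::domain_error> : true_type {};
} // namespace std

// --- source part: the error category and make_error_code ---
namespace {

struct domain_error_category : std::error_category {
    char const* name() const noexcept override { return "fix domain error"; }

    std::string message(int code) const override {
        switch (static_cast<fix::domain_error>(code)) {
        case fix::domain_error::invalid_title:
            return "the issue title is invalid";
        }
        return "unknown domain error";
    }
};

domain_error_category const category{};

} // namespace

namespace fix {

std::error_code make_error_code(domain_error error) {
    return {static_cast<int>(error), category};
}

} // namespace fix

// --- usage: the is_error_code_enum specialization makes the conversion implicit ---
int main() {
    std::error_code const ec = fix::domain_error::invalid_title;
    std::cout << ec.category().name() << ": " << ec.message() << '\n';
}
```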

These few lines are all that is needed to return std::expected<T, std::error_code> from my domain functions instead of std::optional. From here on, it is just a matter of adding enumerators to the enum as needed and implementing their conversion into readable error messages.
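To illustrate where this is heading, here is a sketch of a create function returning such a type, written against C++23’s std::expected as a stand-in for the proposed type the post targets. std::errc::invalid_argument stands in for the domain_error wiring above to keep the sketch self-contained, and all names are assumptions:

```cpp
#include <expected>       // C++23; stands in for the proposed std::expected
#include <iostream>
#include <string>
#include <string_view>
#include <system_error>

namespace fix {

class title {
public:
    static std::expected<title, std::error_code> create(std::string_view text) {
        if (text.size() < 6 || text.size() > 120) {
            // The real code would use make_error_code(domain_error::...);
            // a standard errc keeps this sketch self-contained.
            return std::unexpected{std::make_error_code(std::errc::invalid_argument)};
        }
        return title{std::string{text}};
    }

    std::string const& to_string() const { return text_; }

private:
    explicit title(std::string text) : text_{std::move(text)} {}
    std::string text_;
};

} // namespace fix

int main() {
    auto const result = fix::title::create("x");   // too short
    if (!result) {
        std::cout << "error: " << result.error().message() << '\n';
    }
}
```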


That’s it for this week. Did you spot mistakes I made? Would you have done things differently? I’d love to know what you think; please leave your comments and questions below.
