So I did exactly this. I bought this book (second edition) and started my Rust journey.
The book is well organized, starting with the most basic concepts and then gradually adding more and more advanced concepts of the Rust programming language.
It is written in a style that lets you follow the explanations and learn programming using Rust even if you have never learned a programming language before.
If you are already familiar with other programming languages like C#, C++ or even F#, you can easily relate to some of the concepts presented, while others will probably be very new to you.
So in summary, this book is definitely a great starting point for learning Rust, for beginners as well as experienced software developers!
When I read about a new book by Uncle Bob on X, I was immediately excited: so far I had read almost all of his books, and I have loved functional programming for years. A perfect match!
So I pre-ordered the book back in March 2023 and immediately started reading when it finally arrived in January 2024.
Functional Design: Principles, Patterns, and Practices
The book starts with basic concepts of functional programming like immutability, recursion and laziness.
It continues by comparing object-oriented programming (OOP) solutions and functional programming (FP) solutions, using multiple examples to explain the differences between the two approaches and to emphasize certain benefits of FP over OOP.
Uncle Bob then explores software design in FP by discussing how SOLID principles apply to FP and how common design patterns are applied to FP.
He finishes the book with a comprehensive case study on how to design and implement an entire game in FP.
All in all, Functional Design: Principles, Patterns, and Practices is definitely a solid introduction to FP for OOP developers. It is full of practical code samples - which are all in Clojure. In fact, to me it felt like half the book is filled with Clojure code.
For the first half of the book I tried to follow the code samples and then concluded: I don’t want to learn Clojure - at least not now - as Clojure is a dynamic programming language and I am clearly a fan of the power of statically typed programming languages like F#.
I finally decided to skip most of the Clojure examples, focused on the FP concepts and design aspects instead, and finished the second half of the book in half a day.
If you already have solid practical experience in FP like me, then this book will not offer many new insights to you.
If you are new to FP and you like the freedom of dynamically typed programming languages, then this book is certainly a great starting point for your FP journey!
If you are new to FP and you prefer the safety of strongly typed programming languages, then I would rather recommend the book Domain Modeling Made Functional: Tackle Software Complexity with Domain-Driven Design and F# to start your FP journey.
What if I told you that Task<T> is nothing but a fancy delegate which is not coupled to threads at all?
Let’s start with a very simple component. It provides a single API which accepts a request object, performs some computation and returns a response object.
Such an API design obviously forces the caller to “actively” wait for the response to be available in order to be able to continue the processing. This design couples the control flow of processing a request and the control flow of handling its response.
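As a minimal sketch of this synchronous design (the `Request`/`Response` payload shapes and the `Component` class are illustrative assumptions; `IComponent` and `Execute` are the names used later in this post):

```csharp
// Minimal sketch of the synchronous request-response design.
// Payload shapes are illustrative assumptions, not the post's original sample.
public record Request(string Payload);
public record Response(string Payload);

public interface IComponent
{
    // The caller blocks here until the response is available.
    Response Execute(Request request);
}

public class Component : IComponent
{
    public Response Execute(Request request)
    {
        // ... does some computation ...
        return new Response(request.Payload.ToUpperInvariant());
    }
}
```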
Now let’s assume you want to decouple these two parts of the control flow e.g. because the processing of the request is delayed because of some queue or it runs in a different thread or process or even on a different machine or in the cloud.
One common way to achieve this in .NET is using events.
(Hint: This sample code is kept as simple as possible, event deregistration is skipped for this reason.)
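A rough reconstruction of what such an event-based sample might look like (the type and event names are assumptions, and deregistration is skipped as the hint says):

```csharp
// Illustrative sketch of the event-based design -- names are assumptions.
// The response is published via an event instead of a return value,
// so the caller no longer blocks on Execute.
using System;

public record Request(string Payload);
public record Response(string Payload);

public class Component
{
    // Callers subscribe here to receive the response.
    // (Deregistration is intentionally skipped for brevity.)
    public event EventHandler<Response>? ResponseAvailable;

    public void Execute(Request request)
    {
        // ... does some computation, possibly later / elsewhere ...
        ResponseAvailable?.Invoke(this, new Response(request.Payload.ToUpperInvariant()));
    }
}
```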
An alternative to .NET events would be passing a simple delegate which we could also call continuation.
In order to improve the readability of this design, let’s get it closer to the initial request-response semantics by inventing a small class called Promise which is returned from the Execute API and which is then used to hook up the continuation.
Let’s also add error handling to this design.
The implementation of the IComponent interface would use the Promise class like this:
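The following is a rough reconstruction of how such a Promise class, including error handling, and the IComponent implementation might look (the member names Then/Resolve/Reject and the payload types are assumptions, not the post's exact code):

```csharp
// Illustrative reconstruction -- member names are assumptions.
// The Promise stores the continuation (and an error handler) and invokes
// it once the component resolves or rejects the promise.
using System;

public record Request(string Payload);
public record Response(string Payload);

public class Promise<T>
{
    private Action<T>? myOnCompleted;
    private Action<Exception>? myOnError;
    private T? myResult;
    private Exception? myError;
    private bool myIsCompleted;

    // Called by the consumer to hook up the continuation.
    public void Then(Action<T> onCompleted, Action<Exception>? onError = null)
    {
        myOnCompleted = onCompleted;
        myOnError = onError;

        // The promise may already be settled when the continuation arrives.
        if (myIsCompleted)
        {
            if (myError != null) myOnError?.Invoke(myError);
            else myOnCompleted?.Invoke(myResult!);
        }
    }

    // Called by the producer (the component).
    public void Resolve(T result)
    {
        myResult = result;
        myIsCompleted = true;
        myOnCompleted?.Invoke(result);
    }

    public void Reject(Exception error)
    {
        myError = error;
        myIsCompleted = true;
        myOnError?.Invoke(error);
    }
}

public interface IComponent
{
    Promise<Response> Execute(Request request);
}

public class Component : IComponent
{
    public Promise<Response> Execute(Request request)
    {
        var promise = new Promise<Response>();
        try
        {
            // ... does some computation ...
            promise.Resolve(new Response(request.Payload.ToUpperInvariant()));
        }
        catch (Exception ex)
        {
            promise.Reject(ex);
        }
        return promise;
    }
}
```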
Now this design is already pretty close to that of Task<T>, which shows us that Task<T> is actually nothing but an implementation of a promise provided by .NET.
But what about async/await? Isn’t that what makes Task<T> so powerful?
Well, actually async/await is a compiler feature which is completely independent of Task<T> and the Task Parallel Library (TPL). To prove this, let’s enable it for our custom Promise implementation as well.
Therefore, we just need to provide an API (e.g. an extension method) called GetAwaiter which returns a type with the following members:
INotifyCompletion (the interface providing the OnCompleted method)
IsCompleted
GetResult
And with this the compiler allows us to use async/await with the Execute API as usual.
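To illustrate, here is a minimal sketch of such an awaiter for a custom Promise<T>. Only GetAwaiter, INotifyCompletion, IsCompleted and GetResult are required by the compiler; all other names here are assumptions:

```csharp
// Illustrative sketch: the minimal awaiter contract the compiler needs.
// GetAwaiter() returns a type implementing INotifyCompletion and exposing
// IsCompleted and GetResult -- no Task, no TPL involved.
using System;
using System.Runtime.CompilerServices;

public class Promise<T>
{
    private Action? myContinuation;
    private T? myResult;

    public bool IsCompleted { get; private set; }

    public T Result => myResult!;

    public void Resolve(T result)
    {
        myResult = result;
        IsCompleted = true;
        myContinuation?.Invoke();
    }

    internal void Register(Action continuation)
    {
        if (IsCompleted) continuation();
        else myContinuation += continuation;
    }
}

public readonly struct PromiseAwaiter<T> : INotifyCompletion
{
    private readonly Promise<T> myPromise;

    public PromiseAwaiter(Promise<T> promise) => myPromise = promise;

    public bool IsCompleted => myPromise.IsCompleted;

    // Called by the compiler-generated state machine to schedule
    // the rest of the async method as a continuation.
    public void OnCompleted(Action continuation) => myPromise.Register(continuation);

    public T GetResult() => myPromise.Result;
}

public static class PromiseExtensions
{
    // The extension method the compiler looks for when 'await' is used.
    public static PromiseAwaiter<T> GetAwaiter<T>(this Promise<T> promise) =>
        new PromiseAwaiter<T>(promise);
}
```

With this in place, `await somePromise` compiles without any reference to Task<T> or the TPL.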
In essence, that’s exactly how Task<T> works!
So let’s use it instead of our custom Promise implementation.
Quod erat demonstrandum.
Task<T> and async/await are quite convenient and powerful concepts when dealing with asynchronous APIs and concurrency, BUT neither of these concepts causes (!) threads or concurrency.
Full source code: https://github.com/plainionist/AboutCleanCode/tree/main/Promise
Behavior driven development (BDD) and the Gherkin language, first and foremost, are about collaboration and documentation.
The key idea is to specify the behavior of a software system by describing its features using concrete scenarios and examples. By describing the scenarios in the language of the domain and by doing this together with the domain experts, we ensure that each feature is completely covered by its scenarios and that each scenario is specified correctly.
Using this approach ensures that we build the right system.
We write the scenarios using the Gherkin language, which is designed to be both human and machine readable. By parsing the scenarios and automating their steps we turn the specification into executable test cases which verify that the implemented features behave as specified.
Using this approach ensures that we build the system right.
Now, reading raw Gherkin in an IDE is fine when developing and reviewing a particular feature and its scenarios. But when it comes to providing a long term specification, HTML has clear benefits over Gherkin with respect to readability of the scenarios and navigation between features.
For this reason I have added support for generating HTML documentation from Gherkin based feature files to the TickSpec extension TickSpec.Build.
To generate the HTML documentation run the following command:
TickSpec.Build doc ./src ./html
The first folder specifies the root of the source tree which should be searched for *.feature files. The second folder specifies the location where the HTML files should be generated.
You can safely specify the root of your complete code base, as folders containing build artifacts like obj and node_modules are ignored.
The command will generate a separate HTML file for each feature file. The Visual Studio project local folder structure will be preserved. Each HTML file intentionally contains only an HTML fragment of type <article/> so that these articles can easily be integrated into an existing HTML page.
Each article is organized in different sections which provide dedicated CSS classes for styling. Check out the TickSpec.Build documentation for a complete list of available CSS classes.
You can use the command line option --toc html to get a table of contents generated as a standalone HTML page. This option will also add a reference to a CSS stylesheet to each article which can be used for defining the CSS classes mentioned above. This stylesheet needs to be named style.css and has to be located next to the generated table of contents file.
Alternatively you can use the command line option --toc json to get a table of contents generated as a JSON document which can then be used to integrate the generated HTML articles into an existing HTML documentation or web page.
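For illustration, such a JSON table of contents might look roughly like this; the field names are inferred from how the Vue component below consumes the document and may differ from the actual output:

```json
[
  {
    "title": "Best Effort work items explicitly accepted should be highlighted in the backlog",
    "filename": "AcceptingBestEffortImprovements.html",
    "folders": ["Specs"]
  }
]
```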
Personally, I use this HTML generation feature in one of my projects to integrate the BDD specification into an existing web application based on Vue.JS.
Therefore I configured webpack to load the generated HTML articles as raw text files by adding the following configuration to the vue.config.js:
configureWebpack: {
  module: {
    rules: [
      {
        test: /\.html$/,
        include: [
          path.resolve(__dirname, "src/assets/spec")
        ],
        loader: 'raw-loader'
      }
    ]
  }
}
Next, I created a dedicated Vue component to render the specification including the table of contents read from the JSON document and some basic search support.
<template>
  <article>
    <sh-text>
      <h2>Table of Contents</h2>
      <input type="text" v-model="search" placeholder="Search ..."/>
      <span v-if="!articles.length">No Results Found.</span>
      <div v-else v-for="(article, index) in articles" :key="'article_' + index">
        <a @click="selected = article">{{ article.path }}</a>
      </div>
      <div v-html="selected?.html" />
    </sh-text>
  </article>
</template>
<script>
import toc from '@/assets/spec/toc.json'

export default {
  name: 'BackLookSpecs',
  data () {
    return {
      search: '',
      selected: null,
      store: []
    }
  },
  computed: {
    articles () {
      return this.store
        .filter(item => {
          return item.title.toLowerCase().includes(this.search.toLowerCase()) ||
            item.html.toLowerCase().includes(this.search.toLowerCase())
        })
        .sort((a, b) => a.path.localeCompare(b.path))
    }
  },
  created () {
    const specFiles = require.context('@/assets/spec', true, /\.html/)
    const tocIndex = new Map(toc.map((x) => ['./' + x.folders.concat([x.filename]).join('/'), x]))

    this.store = specFiles.keys()
      .map((x) => {
        const tocEntry = tocIndex.get(x)
        return {
          ...tocEntry,
          path: tocEntry.folders.concat([tocEntry.title]).join('/'),
          html: specFiles(x).default
        }
      })
  }
}
</script>
<style>
.gherkin-keyword {
  color: blue;
}

.gherkin-tags {
  font-weight: bold;
  padding-right: 5px;
}
</style>
TickSpec.Build also supports basic MsBuild integration for the HTML documentation generation feature which is described in the project documentation.
In my particular project I have multiple Visual Studio projects containing BDD feature files and I have one specific Visual Studio project which hosts the help system of the web application. In this web project I have integrated the HTML documentation generation for all the feature files in the following way:
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <FeatureFileHtmlInput>..\..\</FeatureFileHtmlInput>
    <FeatureFileHtmlOutput>WebUI\src\assets\spec\</FeatureFileHtmlOutput>
    <TickSpecBuildTocFormat>json</TickSpecBuildTocFormat>
  </PropertyGroup>

  <!-- all the other content not relevant for this post -->

  <ItemGroup>
    <PackageReference Include="TickSpec.Build" Version="2.8.0" />
  </ItemGroup>

  <Target Name="ClearSpecs" BeforeTargets="GenerateFeatureFileHtml">
    <ItemGroup>
      <FilesToDelete Include="$(FeatureFileHtmlOutput)\**\*.html" />
    </ItemGroup>
    <Delete Files="@(FilesToDelete)" />
  </Target>
</Project>
Using this approach I can now publish my system specification, with improved readability and easy-to-use navigation, to the users of my application. Of course this specification does not replace a real user manual, but it definitely can serve as a reference in case of specific questions on how the system behaves.
Just recently, I wrote about my BDD approach in one of my projects in this article. I have used this setup for a while now and it actually worked quite well for me, but one thing turned out to be quite annoying over time.
The problem was: the Test Explorer couldn’t identify the sources of a test case, which means that if a test case failed, I couldn’t simply double-click to navigate to its source, and starting the test case with a debugger attached was not easily possible either.
The reason for this problem was that the test cases were generated at run-time using reflection and NUnit’s TestCaseSource mechanism, which means there simply are no sources for these test cases.
// GENERATED DURING BUILD
namespace Specification

open TickSpec
open NUnit.Framework
open System.Reflection

[<TestFixture>]
type ``Best Effort work items explicitly accepted should be highlighted in the backlog``() =
    inherit AbstractFeature()

    static member Scenarios =
        AbstractFeature.GetScenarios(Assembly.GetExecutingAssembly(),
            "AcceptingBestEffortImprovements.feature")
open System
open System.Text.RegularExpressions
open System.Runtime.ExceptionServices

type AbstractFeature() =
    static member GetScenarios(assembly:Assembly, featureSource) =
        let createTestCaseData (feature:Feature) (scenario:Scenario) =
            let scenarioName =
                scenario.Parameters
                |> Seq.fold (fun (acc:string) p -> acc.Replace("<" + fst p + ">", snd p)) scenario.Name
                |> fun x -> Regex.Replace(x, "^Scenario: ", "")

            (new TestCaseData(scenario))
                .SetName(scenarioName)
                .SetProperty("Feature", feature.Name)
            |> Seq.foldBack(fun tag data -> data.SetProperty("Tag", tag)) scenario.Tags

        let definitions = new StepDefinitions(assembly.GetTypes())

        let createFeature (featureFile:string) =
            let stream = assembly.GetManifestResourceStream(featureFile)
            let feature = definitions.GenerateFeature(featureFile, stream)
            feature.Scenarios
            |> Seq.map (createTestCaseData feature)

        assembly.GetManifestResourceNames()
        |> Seq.filter(fun x -> x.EndsWith(".feature", StringComparison.OrdinalIgnoreCase))
        |> Seq.filter(fun x -> x.EndsWith("." + featureSource, StringComparison.OrdinalIgnoreCase))
        |> Seq.collect createFeature
        |> List.ofSeq

    [<TestCaseSource("Scenarios")>]
    member this.Bdd (scenario:Scenario) =
        if scenario.Tags |> Seq.exists ((=) "ignore") then
            raise (new IgnoreException("Ignored: " + scenario.ToString()))

        try
            scenario.Action.Invoke()
        with
        | :? TargetInvocationException as ex ->
            ExceptionDispatchInfo.Capture(ex.InnerException).Throw()
The only solution to this problem I saw was generating the test cases as source code at build time.
For that, I first created a parser which reads the feature title as well as the scenario titles from the feature files of a given project.
let ReadFeatureFile file =
    let linesWithLineNo =
        File.ReadAllLines(file)
        // start counting lines with 1 as in any editor
        |> Seq.mapi(fun i l -> i + 1, l)
        |> List.ofSeq

    let grep prefix =
        linesWithLineNo
        |> Seq.filter(fun (_,x) -> x.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
        |> Seq.map(fun (i,x) -> i, x.Trim(), x.Substring(prefix.Length).Trim())

    {
        Name = grep "Feature:" |> Seq.exactlyOne |> fun (_,_,x) -> x
        Filename = Path.GetFileName(file)
        Scenarios =
            grep "Scenario:"
            |> Seq.append (grep "Scenario Outline:")
            |> Seq.map(fun (lineNo, name, title) ->
                {
                    Name = name
                    Title = title
                    StartsAtLine = lineNo + 1 // skip scenario title
                })
            |> List.ofSeq
    }
Then I had to update the existing code generator to also generate test cases (methods) for each scenario. Earlier I used a template engine to generate the code, but the engine I had chosen turned out to be too limited for the new requirements, so I removed it again and now generate the code using plain strings and a TextWriter.
let writeHeader (writer:TextWriter) =
    writer.WriteLine("namespace Specification")
    writer.WriteLine()
    writer.WriteLine("open System.Reflection")
    writer.WriteLine("open NUnit.Framework")
    writer.WriteLine("open TickSpec.CodeGen")
    writer.WriteLine()

let writeTestCase (writer:TextWriter) featureFile scenario =
    writer.WriteLine($"    [<Test>]")
    writer.WriteLine($"    member this.``{scenario.Title}``() =")
    writer.WriteLine($"#line {scenario.StartsAtLine} \"{featureFile}\"")
    writer.WriteLine($"        this.RunScenario(scenarios, \"{scenario.Name}\")")
    writer.WriteLine()

let writeTestFixture (writer:TextWriter) feature =
    writer.WriteLine($"[<TestFixture>]")
    writer.WriteLine($"type ``{feature.Name}``() = ")
    writer.WriteLine($"    inherit AbstractFeature()")
    writer.WriteLine()
    writer.WriteLine($"    let scenarios = AbstractFeature.GetScenarios(")
    writer.WriteLine($"        Assembly.GetExecutingAssembly(), \"{feature.Filename}\")")
    writer.WriteLine()

    feature.Scenarios
    |> Seq.iter (writeTestCase writer feature.Filename)
For now this is a feasible approach, as most of the actual logic needed to find and execute a particular scenario using TickSpec is still in a base class similar to the one shown above.
Finally, I even added #line directives pointing to the feature file instead of the generated “code behind”, which causes the VS Test Explorer to navigate to the particular scenario in the feature file when double-clicking the test case.
[<TestFixture>]
type ``Highlight accepted BestEffort work items``() =
    inherit AbstractFeature()

    let scenarios = AbstractFeature.GetScenarios(
        Assembly.GetExecutingAssembly(), "AcceptingBestEffortImprovements.feature")

    [<Test>]
    member this.``Rendering the initiative backlog``() =
#line 4 "AcceptingBestEffortImprovements.feature"
        this.RunScenario(scenarios, "Scenario: Rendering the initiative backlog")

    [<Test>]
    member this.``Rendering the team backlog``() =
#line 11 "AcceptingBestEffortImprovements.feature"
        this.RunScenario(scenarios, "Scenario: Rendering the team backlog")

    [<Test>]
    member this.``Rendering the team improvements backlog``() =
#line 18 "AcceptingBestEffortImprovements.feature"
        this.RunScenario(scenarios, "Scenario: Rendering the team improvements backlog")
The “upgraded” approach is available as an individual GitHub project as well as a NuGet package.
Give it a try in your next F# and BDD project and let me know how it works for you.
As already mentioned in other posts, one of my projects is a web application which aims to bring maximum transparency into the backlogs of agile teams. Over time this application grew quite a bit, accumulated quite a few features and is now used by more than two dozen teams.
Of course, one important strategy to ensure the quality of this application is test automation. I never followed the classic testing pyramid, which is based on tons of classic unit tests, but rather focused on describing features and scenarios in a BDD style.
Favoring pragmatic setups, I wrote those tests without any “real” BDD framework, which worked quite well for quite some time, but in recent weeks and months I realized that my setup needed some improvement.
So I decided to invest some time, do some evaluation and start migrating my tests to a “real” BDD framework using feature files written in Gherkin language.
And this is how my journey went so far …
As already said, so far I had tried practicing BDD without any framework based on feature files and the Gherkin language. Even though feature files look nice and are easy to read, even by non-technical stakeholders, tests written using the Gherkin approach have a severe drawback: there is no compiler which ensures that the steps used to describe a scenario are actually compatible. This means we can only detect at run-time that e.g. the “output” produced by a particular GIVEN step is not compatible with the “input” required by a particular WHEN step.
For me, valuing the idea of avoiding bugs by design and with the help of the compiler (type safety) wherever possible, this is definitely a severe drawback.
The alternative to an external Domain Specific Language (DSL) like Gherkin is an internal DSL which is based on the capabilities of a general purpose programming language. Hence, such a DSL can benefit from the features of the underlying programming language, like the compiler, but at the same time it is also limited to the syntax of that language.
As the application is developed in F# and F# provides some really nice features which make it very convenient to create internal DSLs (e.g. identifiers allowing spaces, operator overloading), to me it felt like an obvious consequence to describe all features and scenarios in F# directly.
Here is an example of such a scenario:
[<TestFixture>]
module ``Reading remaining work`` =

    [<Test>]
    let ``WorkItem not implemented/done, empty Remaining Work treated as missing estimation`` () =
        WorkItem.Create(WorkItemTypeNames.WorkPackage, [
            WorkItems.Fields.State, "In Work"
            WorkItems.Fields.RemainingWork, null
        ])
        |> Read.RemainingWork
        |> should equal None

        WorkItem.Create(WorkItemTypeNames.UserStory, [
            WorkItems.Fields.State, "In Work"
            WorkItems.Fields.Tags, "EST CL MEDIUM"
        ])
        |> Read.RemainingWork
        |> should equal None
Even though I used this approach successfully for years to ensure the quality of the application, I never managed to make these scenarios readable enough to serve as “requirements documentation” as well. This may seem kind of obvious to you, but when I started using an F# based DSL, I was optimistic that I could develop a syntax which could be understood by other stakeholders as well, without any prior F# knowledge. Well, maybe I was a bit too optimistic ;-)
Time to evaluate the alternative …
The de-facto standard framework and tool for Gherkin based BDD in .NET is SpecFlow. It is definitely a great BDD framework and I even use it in another, C# based, project. Unfortunately its F# support is quite limited, so I continued my research and finally found TickSpec, which even claims in its project description on GitHub to provide a “powerful F# integration”.
The “killer feature” from my perspective: TickSpec allows passing values “directly” from one step to the next. There is still no compiler which ensures that the steps used are actually compatible, but at least the required “inputs” of a step are made explicit on its “API surface”.
Getting started with TickSpec turned out to be pretty simple. I installed the respective NuGet package and wrote my first Gherkin based scenario:
Feature: Best Effort work items explicitly accepted should be highlighted in the backlog

Scenario: Rendering the initiative backlog
    GIVEN an improvement
    AND BacklogSet is set to 'Best Effort'
    AND Decision is set to 'accepted'
    WHEN rendering the initiative backlog
    THEN the BacklogSet column is highlighted
I included the respective feature file in the Visual Studio project file (“.fsproj”) as an embedded resource:
<EmbeddedResource Include="AcceptingBestEffortImprovements.feature" />
To automate this scenario I provided so-called “step definitions” in a simple F# module (comparable to a static class in C#). The module name doesn’t have to follow a specific convention, and steps can be organized in multiple modules.
Here are some of these step definitions:
// returns an IWorkItem
let [<Given>] ``an improvement`` () =
    WorkItem.Create(WorkItemTypeNames.Improvement, [
        WorkItems.Fields.IterationPath, VB99A.PlanningSheet.ReleaseIterationPath
    ])

// requires an IWorkItem e.g. from the previous step
let [<Given>] ``BacklogSet is set to '(.*)'`` (value:string) (wi:IWorkItem) =
    wi.With(WorkItems.Fields.BacklogSet, value)
To execute such a scenario using one of the popular (unit) test frameworks like NUnit or XUnit, the TickSpec project suggests creating a generic test fixture which loads all scenarios of all feature files at run-time and dynamically builds test cases using e.g. NUnit’s TestCaseSource feature.
I copied the provided FeatureFixture.fs for NUnit and got my first scenarios running within a few minutes.
I executed my scenarios in Visual Studio using the built-in Test Explorer. The scenarios turned green, which confirmed that passing data from one step to the other works as promised.
But this approach had one ugly flaw:
As there was just a single test fixture which dynamically created the test cases, all scenarios of all features got grouped together and visualized in the Test Explorer below a single test class. As a workaround I could have configured the Test Explorer to group the test cases (scenarios) by “traits” (the generic test fixture provides the feature description as a property to NUnit, which is interpreted by the Test Explorer as a “trait”). But as I still had lots of tests following my previous approach, this workaround didn’t feel convenient.
I did some experiments with the TestCaseSource and the TestFixtureSource features of NUnit, but I didn’t find a way to get the scenarios visualized as part of their respective feature.
The only feasible approach seemed to be converting the generic test fixture into a base class and creating a derived class for each feature file, which would then explicitly pass the name of the feature file from which the scenarios should be read and the test cases should be created. I could then copy the feature description from each feature file and use it as the name of the respective derived test fixture.
Simple, but a clear violation of the DRY principle.
And this is where code generation comes into the picture …
I recently read about Source Generators in .NET 6 but unfortunately these only support C#.
From some other projects long, long ago I remembered “T4 templates” and some more research revealed Mono.TextTemplating which “started out as an open-source reimplementation of the Visual Studio T4 text templating engine, but has since evolved to have many improvements over the original, including support for C# 10 and .NET 6.” (from the projects README.md).
It felt like a perfect fit so I decided to give it a try.
As mentioned above, I converted the generic test fixture into a base class which takes the name of the relevant feature file as a parameter. This means the T4 template simply has to generate one derived class per feature file.
After a few attempts this is the template I came up with:
<#@ template language="C#" hostspecific="true" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Linq" #>
<#@ parameter name='featuresFolder' #>
namespace Specification
open TickSpec
open NUnit.Framework
open System.Reflection
<# foreach(var featureFile in Directory.GetFiles(featuresFolder, "*.feature")) { #>
<#
var fileName = Path.GetFileName(featureFile);
var title = File.ReadAllLines(featureFile)
.Select(x => x.Trim())
.First(x => x.StartsWith("Feature: ", StringComparison.OrdinalIgnoreCase));
title = title.Substring("Feature: ".Length);
#>
[<TestFixture>]
type ``<#= title #>``() =
    inherit AbstractFeatures()

    static member Scenarios = AbstractFeatures.GetScenarios(Assembly.GetExecutingAssembly(), "<#= fileName #>")
<# } #>
Note that I had to specify hostspecific="true" in order to be able to pass parameters to the template.
Mono.TextTemplating provides a command line tool, “dotnet-t4”, to generate code using a T4 template, but to integrate the code generation more easily into the build process of my project and to have more flexibility later on, I decided to use the library directly and host the template engine in my own CLI.
First, I tried the Mono.TextTemplating NuGet package in an F# based command line program, but this failed at run-time, complaining that the proper “.NET hosting” was not found. A quick research didn’t reveal any solution. The project documentation also offered a Mono.TextTemplating.Roslyn package which “can be used to bundle a copy of the Roslyn C# compiler and host it in-process. This may improve template compilation performance when compiling multiple templates, and guarantees a specific version of the compiler.” So I converted my program to C# and installed this package instead. The core logic is just these few lines of code:
var template = args[0];
var output = args[1];
Console.WriteLine($"Generating '{template}' -> '{output}'");
var generator = new TemplateGenerator();
generator.AddParameter(null, null, "featuresFolder", Path.GetDirectoryName(output));
var success = generator.ProcessTemplateAsync(template, output).Result;
if (!success)
{
    foreach (var error in generator.Errors)
    {
        Console.Error.WriteLine(error);
    }
}
return success ? 0 : 1;
I successfully executed the program on the command line, passing the template and the output file name as arguments. The generated derived test fixture looked like this:
namespace Specification

open TickSpec
open NUnit.Framework
open System.Reflection

[<TestFixture>]
type ``Best Effort work items explicitly accepted should be highlighted in the backlog``() =
    inherit AbstractFeatures()

    static member Scenarios =
        AbstractFeatures.GetScenarios(Assembly.GetExecutingAssembly(),
            "AcceptingBestEffortImprovements.feature")
When I executed my scenarios in the Test Explorer again I got those grouped and visualized as expected: scenarios below the respective feature and the features (test classes) properly named.
The final step left was to integrate my little code generator into the build process of my project so that new derived classes get generated whenever I create a new feature file. The simplest approach was to create a new target in the test project and use the BeforeTargets attribute to hook it into the right place of the build process:
<Target Name="GenerateFeatures" BeforeTargets="BeforeBuild;BeforeRebuild">
  <Exec Command="$(MSBuildProjectDirectory)\..\Build.CodeGen\bin\Debug\Build.CodeGen.exe $(MSBuildProjectDirectory)\FeatureFixture.tt" Outputs="FeatureFixture.fs">
    <Output ItemName="Generated" TaskParameter="Outputs" />
  </Exec>
  <ItemGroup>
    <FileWrites Include="@(Generated)" />
  </ItemGroup>
</Target>
Hint: “FileWrites” tells MsBuild that these files should be cleaned up during a “clean build”.
Of course this build script can be further improved, e.g. to support incremental compilation, and it certainly should be factored out so that it can be reused for other test projects in the code base. I have covered these improvements and further details in this video:
And this is the current status of my BDD, Gherkin & TickSpec journey.
Small add-on: As Visual Studio (which I mostly use for this project) doesn’t support proper syntax highlighting for T4 and Gherkin, I installed the “Open in Visual Studio Code” extension and edit these files in VS Code using the “T4 Support” and “Cucumber (Gherkin) Full Support” extensions.
And with this setup I now feel fully enabled to write all new tests using the new Gherkin and TickSpec based approach and I will also migrate all existing scenarios to the new approach step-by-step.
Update: The journey continues here
When I started my YouTube journey I was looking for good advice. I researched successful content creators who shared their journey and experience so that I could learn quickly. I was not after any quick tips, tricks or hacks. I wanted to learn the key fundamentals, the methods and techniques to provide quality content.
One day I came across “Show Your Work”. The title got me immediately, but I was not sure what I would find inside. In the process of buying a copy, I noticed that Austin Kleon had also published a bestseller before that one, Steal Like an Artist, and one afterwards, Keep Going. Having a habit of always buying more books than I am actually able to read, I decided to buy all three immediately and read them in order, back to back.
I really enjoyed reading these three books. I don’t remember ever reading a book by an artist about art - well, long back I did read “The Design of Everyday Things”, but I think that does not count. I learned about new perspectives and different ways of looking at things, but I also found ideas in these books I could closely relate to.
While reading, I reflected not only on content creation but also on software development as such. I reflected on decisions I took, routines I have and things I do and how I do them.
And why do I tell all this?
I think every developer should “steal like an artist” - which is NOT about copy & pasting code from Stackoverflow, as you will learn in the first part of the first book - but I also believe every one of us should give something back to the community so that others can steal from us. Show your work! Share your projects on GitHub, write a blog or publish some YouTube videos. It is not only about “giving”; you also receive more at the same time, through feedback but also through the process of “teaching”, which often requires an even deeper understanding of a topic.
So here are some of my takeaways from these books:
“Worry less about getting things done. Worry more about things worth doing!”
MediatR is a popular library in .NET used to decouple components. In Clean Architecture we aim to keep the core of the application as independent as possible from such “details” like third-party libraries and frameworks.
Nevertheless, I have recently read quite a few articles and watched some great videos about using MediatR in Clean Architecture based projects.
But doesn’t the usage of MediatR in Clean Architecture break the Dependency Rule?
To answer this question let’s analyze some concrete examples.
In the first example the MediatR library is used to decouple the processing of a web request from the application logic computing some weather forecast.
[ApiController]
[Route("[controller]")]
public class ForecastController : ControllerBase
{
    private readonly IMediator myMediator;

    public ForecastController(IMediator mediator)
    {
        myMediator = mediator;
    }

    [HttpGet(Name = "GetWeatherForecast")]
    public async Task<IActionResult> Get()
    {
        var weather = await myMediator.Send(new ForecastRequest());
        return Ok(weather);
    }
}

class ForecastRequest : IRequest<IReadOnlyCollection<Forecast>> { }

class ForecastHandler : IRequestHandler<ForecastRequest, IReadOnlyCollection<Forecast>>
{
    private static readonly string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild",
        "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    public Task<IReadOnlyCollection<Forecast>> Handle(
        ForecastRequest request, CancellationToken cancellationToken)
    {
        IReadOnlyCollection<Forecast> result = Enumerable.Range(1, 5)
            .Select(index => CreateWeatherForecast(DateTime.Now.AddDays(index)))
            .ToList();
        return Task.FromResult(result);
    }

    private Forecast CreateWeatherForecast(DateTime date) =>
        new()
        {
            Date = date,
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        };
}
The ASP.NET controller sends a specific ForecastRequest to the mediator, which then locates the respective request handler in the application layer. Once the weather forecast is computed and returned by the ForecastHandler, the mediator transfers it back to the controller, which returns it to the web client.
When we visualize the dependencies between these classes we get this picture:
The ASP.NET controller obviously depends on MediatR due to its usage of the IMediator interface to send the request. As the controller is located in the frameworks layer (due to its dependency on the ASP.NET framework), this dependency on MediatR is valid according to the Dependency Rule.
But we also see that the ForecastHandler as well as the ForecastRequest - both located in the application layer (use cases layer) - have dependencies on MediatR due to the MediatR interfaces these classes implement. These dependencies clearly violate the Dependency Rule.
Let’s look at another example I have found. In this example MediatR is used inside the RegistrationApplicationService - an application or domain service located in the use cases layer - to publish a domain event.
class RegistrationApplicationService
{
    private readonly IMediator myMediator;

    public RegistrationApplicationService(IMediator mediator)
    {
        myMediator = mediator;
    }

    public void Process(RegistrationRequest request)
    {
        if (!IsValid(request))
        {
            // TODO: report failure
            return;
        }

        // TODO: process further, e.g. store to database

        myMediator.Publish(new RegistrationSucceededDomainEvent(request.User));
    }

    private static bool IsValid(RegistrationRequest request) =>
        !string.IsNullOrWhiteSpace(request.User) && !string.IsNullOrWhiteSpace(request.EMail);
}
record RegistrationRequest(string User, string EMail);
record RegistrationSucceededDomainEvent(string UserId) : INotification;
The RegistrationSucceededDomainEvent can be consumed by multiple receivers in different layers or even different parts of the software system (bounded contexts) and so allows communication between independent features. Examples of such receivers could be a service sending a welcome email or a service collecting data for some “business analytics”.
Visualizing the dependencies between these classes results in the following picture:
The RegistrationApplicationService clearly depends on MediatR due to its usage of the IMediator interface. But the RegistrationRequest and even the RegistrationSucceededDomainEvent (located in the domain layer) also depend on MediatR due to the interfaces both classes implement. Again, these dependencies clearly violate the Dependency Rule!
Now many software engineers would probably argue: “Hey, that’s not a big deal. MediatR doesn’t do any harm to the application layer or the domain layer. Let’s be pragmatic and not over-engineer it!”
And I definitely see this point. It is important to be pragmatic, and an over-engineered solution is rarely a good solution. In the end, software design is all about trade-off decisions, and for small and medium-sized projects the pragmatic approach of accepting the MediatR library as part of the application logic might be perfectly fine.
However, there are reasons why I personally would decide differently (probably because I have been working as a software engineer on large, partially legacy software systems for years):
First, MediatR is a third-party library: we can influence neither how it will develop in the future (e.g. regarding breaking changes) nor whether it will reach end-of-service or end-of-life soon. If you think this is very unlikely to happen, let me remind you of a few examples from the not so distant past where most of us probably thought the same:
Second, suppose the requirements of our project change and we have to replace MediatR with some other library or technology. At the time of writing MediatR only supports in-process communication. What if we require out-of-process communication at some point in time?
Now you might argue that this wouldn’t be a problem at all because we could easily replace all MediatR interfaces with the interfaces of another library and adapt the APIs of the implementations. This is certainly true if we talk about 10 or even 20 “handlers”, but what about 50 or 500? What about a code base with hundreds of thousands or even millions of lines of code developed by many teams? In such a case this “simple replacement of interfaces” easily becomes a coordination nightmare …
Ok, assuming I could convince you that there are cases where the pragmatic approach is not the best option: which options do we have to avoid violating the Dependency Rule?
The “classic” approach in Clean Architecture to integrating third-party libraries or external services without violating the Dependency Rule is the adapter pattern. The basic idea is: we define our own interfaces inside the appropriate layer (e.g. the application layer) and provide an implementation in the frameworks layer which “adapts” the third-party library to the project-specific interface(s).
In case of MediatR there are three aspects for which we would have to provide an adapter:
The IMediator interface (and related interfaces), which provides the APIs for the caller to interact with MediatR. Providing our own interfaces and an implementation which adapts MediatR to these interfaces is pretty straightforward.
The marker interfaces like INotification and IRequest used by MediatR to identify messages. It turns out this would not be that easy: as these interfaces are pure marker interfaces and do not provide any APIs, a compromise could be to derive the custom interfaces from the MediatR interfaces. Strictly speaking this would still violate the Dependency Rule, but if we ever had to exchange these interfaces, it would only be necessary in a single place.
The handler interfaces like IRequestHandler used by MediatR to locate the handlers of requests and notifications. Providing adapters for these interfaces seems to be the most difficult part, as MediatR depends on them to locate the message handlers.
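To make the first of these aspects concrete, here is a minimal sketch of such an adapter. The names IDispatcher and MediatRDispatcher are my own (not from MediatR or any of the cited articles), and MediatR’s IMediator is replaced by a simplified stand-in so the sketch is self-contained:

```csharp
using System.Threading;
using System.Threading.Tasks;

// Simplified stand-in for MediatR's IMediator so this sketch compiles on its
// own; in a real project this interface would come from the MediatR package.
public interface IMediator
{
    Task<object?> Send(object request, CancellationToken cancellationToken = default);
}

// Application layer: our own dispatcher abstraction - no MediatR types leak in.
public interface IDispatcher
{
    Task<TResponse> Send<TResponse>(object request, CancellationToken ct = default);
}

// Frameworks layer: adapts MediatR to the project-specific IDispatcher, so only
// this one class would have to change if MediatR were ever replaced.
public sealed class MediatRDispatcher : IDispatcher
{
    private readonly IMediator myMediator;

    public MediatRDispatcher(IMediator mediator) => myMediator = mediator;

    public async Task<TResponse> Send<TResponse>(object request, CancellationToken ct = default) =>
        (TResponse)(await myMediator.Send(request, ct))!;
}
```

The application layer now depends only on IDispatcher; the MediatR dependency is confined to the frameworks layer.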
From these thoughts we can conclude: the “classic approach” of providing adapters is not that easy to realize in this case. Nevertheless, if you are interested in how such adapters could be implemented without violating the Dependency Rule then check out this video:
For the purpose of this article, let’s explore other options …
A convenient solution would be an alternative MediatR-like library which does not require such marker interfaces for messages and handlers.
Here are some interesting projects I have found:
But these and other projects I have found still require the application logic to implement certain library interfaces to locate message handlers.
I found one MediatR fork which does not require marker interfaces for notifications and requests, but this library still requires interfaces for the request handlers.
Do you know a MediatR-like library which does not force the application logic to implement any library interfaces? Please share your discovery with the community and leave a comment below the article.
Finally, there is always one last option: solving the problem without MediatR at all. And immediately I hear you objecting: “Come on! Reinventing the wheel is not only a waste of time, it also adds unnecessary complexity to my project. That’s not a solution at all!”
Well, if I suggested completely reimplementing MediatR in your project, I would certainly agree with your objection. But do you really need MediatR? Do the benefits outweigh the disadvantages?
In my experience, often only a subset of the full functionality of a library or framework is used. And in some cases a solution without such a third-party library would even have reduced the overall complexity of the software.
Remember the first example? An ASP.NET controller used MediatR to dispatch a request to a request handler. A simple alternative would be a request-specific interface which is implemented by the request handler and injected into the controller, e.g. via a DI container. This approach is easy to understand, makes it much easier to follow the control flow compared to the MediatR based solution, and is still flexible with respect to exchanging the implementation.
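Here is a minimal sketch of this alternative for the forecast example. The interface name IForecastService and the simplified Forecast record are my own, not taken from the original code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Application layer: a request-specific interface instead of IRequest/IRequestHandler.
public interface IForecastService
{
    Task<IReadOnlyCollection<Forecast>> GetForecastAsync();
}

// Simplified version of the forecast data for this sketch.
public record Forecast(DateTime Date, int TemperatureC);

// The former "handler" simply implements the interface - no MediatR involved.
public sealed class ForecastService : IForecastService
{
    public Task<IReadOnlyCollection<Forecast>> GetForecastAsync()
    {
        IReadOnlyCollection<Forecast> result = Enumerable.Range(1, 5)
            .Select(index => new Forecast(
                DateTime.Now.AddDays(index),
                Random.Shared.Next(-20, 55)))
            .ToList();
        return Task.FromResult(result);
    }
}

// Frameworks layer: the controller receives IForecastService from the DI
// container and calls GetForecastAsync() instead of myMediator.Send(...).
```

Registering the implementation is then a single line in the composition root, e.g. `services.AddScoped<IForecastService, ForecastService>();`.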
In the second example MediatR was used to publish a domain event to an unknown number of handlers. One alternative could again be a domain-event-specific interface which is used by the service to publish the event. The service could import a collection of objects implementing the interface, or the composite pattern could be used to support one-to-many event notifications.
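A sketch of the collection variant could look like this. All names here (RegistrationSucceeded, IRegistrationSucceededHandler, RegistrationService) are hypothetical, chosen only to mirror the second example:

```csharp
using System.Collections.Generic;

// Domain layer: the event is a plain record - no library interface to implement.
public record RegistrationSucceeded(string UserId);

// Application layer: an event-specific handler interface.
public interface IRegistrationSucceededHandler
{
    void Handle(RegistrationSucceeded domainEvent);
}

// The service imports all registered handlers (e.g. injected as a collection
// by the DI container) and notifies each one - one-to-many without MediatR.
public sealed class RegistrationService
{
    private readonly IReadOnlyCollection<IRegistrationSucceededHandler> myHandlers;

    public RegistrationService(IReadOnlyCollection<IRegistrationSucceededHandler> handlers) =>
        myHandlers = handlers;

    public void Process(string user)
    {
        // ... validate and store the registration ...
        var domainEvent = new RegistrationSucceeded(user);
        foreach (var handler in myHandlers)
        {
            handler.Handle(domainEvent);
        }
    }
}
```

New receivers (welcome email, business analytics, …) are added by registering another IRegistrationSucceededHandler implementation; the service itself stays untouched.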
In cases where a many-to-many communication pattern is needed, a simple “event bus” implementation or adapter could be created with minimal effort and minimal complexity as shown in this video.
“And what about MediatR behaviors to handle cross-cutting concerns?” you may ask. Well, the “classic” design pattern to address cross-cutting concerns is the decorator pattern, as demonstrated in this video. All we need are interfaces and a design following the Dependency Inversion Principle, which is exactly what I have sketched in this section.
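As a small illustration, a decorator can wrap any such interface to add a cross-cutting concern like logging. The names below (IForecastService, LoggingForecastService) are again my own, not from the original examples:

```csharp
using System;

public interface IForecastService
{
    string GetForecast();
}

public sealed class ForecastService : IForecastService
{
    public string GetForecast() => "Sunny";
}

// Decorator adding a cross-cutting concern (logging) without touching the
// decorated implementation - the same role a MediatR pipeline behavior plays.
public sealed class LoggingForecastService : IForecastService
{
    private readonly IForecastService myInner;
    private readonly Action<string> myLog;

    public LoggingForecastService(IForecastService inner, Action<string> log)
    {
        myInner = inner;
        myLog = log;
    }

    public string GetForecast()
    {
        myLog("GetForecast called");
        var result = myInner.GetForecast();
        myLog($"GetForecast returned '{result}'");
        return result;
    }
}
```

Because caller and callee only know the interface, decorators for validation, caching or metrics can be stacked in the composition root without changing any business code.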
So as you can see, “reinventing the wheel” is indeed an option, especially if we do not really reinvent it but rather choose a pragmatic alternative design.
That was probably quite a controversial discussion. To summarize my perspective: the bigger the code base, the more reluctant I would be to violate the Dependency Rule.
What is your view on that topic? Pragmatism over following the Dependency Rule? Or vice versa? I would appreciate your contribution to this discussion.
The "Implementing Clean Architecture" series
The book starts with a quick tour of what BDD is really about, and already there it becomes clear that BDD is much more than just writing tests with GIVEN-WHEN-THEN phrases. The rest of the book is divided into four parts.
The first part is all about the business. It explains how to define the business goals (the “why”) and the capabilities (the “how”) of the system to be built. It continues explaining how to come up with features (the “what”) and how those can be broken down into stories and illustrated by concrete examples. The emphasis is thereby always on communication and collaboration between business/domain experts and the team. In essence, it is all about “building the right system”.
The second part describes how features and examples are turned into acceptance criteria and how those are turned into “executable specifications”. The key message is: “Don’t write tests, write executable specifications”. To achieve this, this part gives an overview of Gherkin and how to write good BDD scenarios.
The third part focuses on how to automate the scenarios written in Gherkin. It covers design aspects as well as how to use different automation frameworks in different programming languages. It finally shows that the idea of “executable specifications” is not limited to automating (business) acceptance criteria but can also be beneficial for describing and verifying lower-level technical aspects.
The last part of the book shows how BDD can even support project management to track the progress of the project and how it can be integrated into any CI/CD pipeline.
In summary, BDD is not just about writing tests using GIVEN-WHEN-THEN. Instead, it is what the name already indicates: a complete software development approach.
If you want to get the maximum benefit out of BDD for your project, this book is worth reading, even if you have some practical experience with Gherkin and BDD tools already.
Do you have goals you want to achieve but you are not making much progress yet?
If you want better results, then forget about setting goals. Focus on your system instead.
– James Clear, Atomic Habits
This is the book you want to read next:
No matter what your goals are, no matter what you want to change or what you want to get better at, this book is definitely going to help you a lot!
The problem with a goals-first mentality is that you’re continually putting happiness off until the next milestone. […] A systems-first mentality provides the antidote. When you fall in love with the process rather than the product, you don’t have to wait to give yourself permission to be happy. […] True long-term thinking is goal-less thinking. […] Ultimately, it is your commitment to the process that will determine your progress.
– James Clear, Atomic Habits
After laying the foundations of behavior, change and habits, the main content is structured into four chapters, the “four rules of behavioral change”.
The rules to create a good habit are:
The rules to break a bad habit are:
Each chapter provides an easy-to-understand explanation of what the rule is about, including examples the reader can relate to as well as techniques to implement the rule in daily life.
One such powerful and easy to implement technique is “habit stacking” which is a strategy to pair a new habit with an existing one.
Example: After I got my second coffee, I perform all open code reviews.
“Temptation bundling” is a technique to make a habit more attractive. The strategy is to pair an action you want to do with an action you need to do.
Example: After I have reviewed the next chapter of the developer manual (need), I try out the cool, new library I have found (want).
Reframing your habits to highlight their benefits rather than their drawbacks is a fast and lightweight way to reprogram your mind and make a habit more attractive.
– James Clear, Atomic Habits
If you want to master a habit, the key is to start with repetition, not perfection. […] Master the habit of showing up. The truth is, a habit must be established before it can be improved.
– James Clear, Atomic Habits
The only way to become excellent is to be endlessly fascinated by doing the same thing over and over. You have to fall in love with boredom.
– James Clear, Atomic Habits
I hope you feel motivated now! Get a copy of that awesome book, start reading, start changing!