Clean architecture with C#/.Net Core and MediatR - Part 3

The MediatR Library

Clean Architecture is an Interface-driven design. Everything is connected via Interfaces and their hidden Implementations. A request comes from the Framework layer to the Business handler via a Business interface. A request from the Business circle to the database or other external services also goes through an Adapter interface.

The Mediator pattern and the MediatR library are just another way to write Interfaces and Implementations (via the TRequest/TResponse types). In fact, you can simply define your own interface and wire up the corresponding implementation through your IoC container. However, the main reason I use MediatR is its excellent dynamic pipeline behavior, which lets me pull most of the cross-cutting concerns out of the main handler, keeps everything cleaner and produces a concise, testable handler class.

A very simple handler in MediatR looks like this

public class InsertUser
{
    /// <summary>
    /// Insert a new User and return that User
    /// </summary>
    public class Request : IRequest<Response>
    {
        public int ClientId { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
    }

    public class Response
    {
        public User User { get; set; }
    }

    public class Handler : IRequestHandler<Request, Response>
    {
        public async Task<Response> Handle(Request request, CancellationToken cancellationToken)
        {
            // implement your logic here
            await CheckExistence(request.Username);

            // implement your logic here
            var user = await SomeFunction(cancellationToken);

            return new Response
            {
                User = user
            };
        }
    }
}
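
For completeness, the calling side only deals with the Request/Response pair and dispatches it through MediatR's IMediator. A minimal sketch (the UsersController class and the sample values are hypothetical):

using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class UsersController
{
    private readonly IMediator _mediator;

    public UsersController(IMediator mediator) => _mediator = mediator;

    // The caller never references the Handler class directly
    public Task<InsertUser.Response> Insert(CancellationToken cancellationToken)
        => _mediator.Send(new InsertUser.Request
        {
            ClientId = 1,
            Username = "john",
            Password = "secret"
        }, cancellationToken);
}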

In this simplest form, it doesn't look much different from what we usually do with a normal Interface. However, let's imagine what happens when you want to add these requirements

  • Log the related information to debug later.
  • Track the process metrics to monitor and analyze performance.
  • Lock the process to avoid race conditions.
  • Transform the request/response format in a pre-defined way.
  • Handle errors.
  • Other cross-cutting concerns?…
  • A more important question: How to group related requests and re-use these cross-cutting concern handlers?
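
This is where MediatR's pipeline behaviors come in: each cross-cutting concern becomes a small behavior that wraps every handler. Below is a minimal logging/timing sketch; it is not taken from this series, the LoggingBehavior name is mine, and the exact Handle signature depends on your MediatR version.

using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.Logging;

// Wraps every handler with logging and timing, so the handlers themselves stay clean
public class LoggingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    private readonly ILogger<LoggingBehavior<TRequest, TResponse>> _logger;

    public LoggingBehavior(ILogger<LoggingBehavior<TRequest, TResponse>> logger)
    {
        _logger = logger;
    }

    // Note: newer MediatR versions swap the order of cancellationToken and next
    public async Task<TResponse> Handle(
        TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
    {
        _logger.LogInformation("Handling {Request}", typeof(TRequest).Name);
        var stopwatch = Stopwatch.StartNew();

        // Call the next behavior in the pipeline (or the handler itself)
        var response = await next();

        _logger.LogInformation("Handled {Request} in {Elapsed} ms",
            typeof(TRequest).Name, stopwatch.ElapsedMilliseconds);
        return response;
    }
}

Registered once as an open generic (for example with Autofac's RegisterGeneric), it applies to every Request/Response pair without the handlers knowing anything about it.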
Read more

Clean architecture with C#/.Net Core and MediatR - Part 2

3. Runtime Layer

The Runtime Layer contains nearly no logic. It is simply the place to bootstrap the application and register all the necessary dependencies. It acts as a gateway that feeds input data into the Business Flows and transfers the output back to the caller. That means your Business Flows can be embedded into any Runtime type, from an HTTP API Server to a Worker that processes data from a Message Queue or a one-time Script,… Here are some examples of how they might look

HTTP Server

Http Runtime

For an HTTP Server, the APIs simply deserialize the incoming HTTP Request into the Business Flow input and then serialize the output to send back to the client.

In case you use ASP.Net Core and Autofac (like me)…

public class Startup
{
    // ...other methods

    /// <summary>
    /// Autofac method
    /// </summary>
    /// <param name="builder"></param>
    public void ConfigureContainer(ContainerBuilder builder)
    {
        builder.RegisterModule<Truongtx.Business.AutofacModule>();
        builder.RegisterModule<Truongtx.Adapter.AutofacModule>();
    }
}
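
Each module encapsulates the registrations of one layer, so the Runtime project only needs to reference the modules. A minimal sketch of what Truongtx.Business.AutofacModule could contain (the SendMarketingEmails implementation class here is an assumption):

using Autofac;

namespace Truongtx.Business
{
    // Groups all Business layer registrations in one place
    public class AutofacModule : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            // Illustrative registration: bind a Business Flow implementation
            // to the interface that the Runtime layer depends on
            builder.RegisterType<SendMarketingEmails>()
                   .As<ISendMarketingEmails>()
                   .InstancePerLifetimeScope();
        }
    }
}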

[ApiController]
public class NpsController : ControllerBase
{
    private readonly Business.ISendMarketingEmails _sendMarketingEmails;

    public NpsController(Business.ISendMarketingEmails sendMarketingEmails)
    {
        _sendMarketingEmails = sendMarketingEmails;
    }

    /// <summary>
    /// Send Marketing Emails for a Campaign
    /// </summary>
    /// <param name="marketingCampaignId"></param>
    /// <returns></returns>
    [Route("/api/marketing-campaigns/{marketingCampaignId}/send-emails")]
    [HttpPost]
    public Task<string> SendMarketingEmails(int marketingCampaignId)
        => _sendMarketingEmails.Execute(marketingCampaignId);

    // ... other APIs
}
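
The same Business Flow can be dropped into a one-time Script runtime with nothing more than a different bootstrap. A rough sketch, assuming the same Autofac modules (the Program class below is illustrative):

using System;
using System.Threading.Tasks;
using Autofac;

public static class Program
{
    public static async Task Main(string[] args)
    {
        // Bootstrap: exactly the same registrations as the HTTP Runtime
        var builder = new ContainerBuilder();
        builder.RegisterModule<Truongtx.Business.AutofacModule>();
        builder.RegisterModule<Truongtx.Adapter.AutofacModule>();
        using var container = builder.Build();

        // Resolve and execute the Business Flow directly
        var sendMarketingEmails = container.Resolve<Business.ISendMarketingEmails>();
        var result = await sendMarketingEmails.Execute(int.Parse(args[0]));
        Console.WriteLine(result);
    }
}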
Read more

Clean architecture with C#/.Net Core and MediatR - Part 1

2. Adapter Layer

The Business layer mentioned before contains a list of interfaces for connecting to external dependencies (external services, database storage). It doesn't care which database system is used or which protocol the external services need. All of that logic is implemented in this Adapter layer.

Adapter Code

An implementation may look like this

public class GetContactsByMarketingCampaignId : IGetContactsByMarketingCampaignId
{
    private readonly IMapper _mapper;

    public GetContactsByMarketingCampaignId(IMapper mapper)
    {
        _mapper = mapper;
    }

    public IList<Business.Contact> Execute(int marketingCampaignId)
    {
        // get from Redis cache and then fallback to SQL
        var contacts = GetFromRedis(marketingCampaignId) ?? GetFromSql(marketingCampaignId);

        // use AutoMapper to map back to Business model
        return _mapper.Map<IList<Business.Contact>>(contacts);
    }

    private IList<SqlModels.Contact> GetFromRedis(int marketingCampaignId)
    {
        // logic to get from redis here
        ...
    }

    private IList<SqlModels.Contact> GetFromSql(int marketingCampaignId)
    {
        // logic to get from sql here
        ...
    }
}
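
For context, the corresponding interface is declared in the Business layer and knows nothing about Redis, SQL, or AutoMapper. A minimal sketch (the Contact shape shown here is only illustrative):

using System.Collections.Generic;

namespace Business
{
    // Declared by the Business layer; the Adapter layer provides the implementation above
    public interface IGetContactsByMarketingCampaignId
    {
        IList<Contact> Execute(int marketingCampaignId);
    }

    // Illustrative Business model; the real one carries whatever fields the flows need
    public class Contact
    {
        public int Id { get; set; }
        public string Email { get; set; }
    }
}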
Read more

Okay, I’m porting some modules from Nodejs to C# and I couldn’t find any built-in modules or libraries to do this, so I had to implement it manually, luckily with some help from Stack Overflow.

I have a message that was encrypted using crypto-js and stored in the database. Here is the Nodejs code that generates the encrypted data

const cryptojs = require('crypto-js');
const encryptedMsg = cryptojs.AES.encrypt('message', 'secret').toString();

The result is a string that looks like this

U2FsdGVkX184KJolbrZkg8w+rX/V9OW7sbUvWPVogdY=

Now, I need to read it back in C# and decrypt it to get the original message. The built-in Aes class in C# requires a Key and an IV to be passed in explicitly, but there is no utility to generate the Key and the IV from a given string. The encrypt method above from crypto-js is a simplified form that derives the Key and the IV implicitly from the passphrase. It doesn’t play well with C# and is actually not standard AES usage (crypto-js still allows you to pass in the Key and IV explicitly).

For AES Cipher Algorithm, we need a Key and an IV (Initialization Vector) to add randomness to the encrypted data.

After playing around with the crypto-js code base and with some help from Stack Overflow, I finally figured out how the data is stored and how the Key/IV are generated. To derive a key from the passphrase, crypto-js uses the OpenSSL-compatible derivation function EVP_BytesToKey. Here are the steps

  • Generate a random 8-byte salt.
  • Use it along with the input passphrase to generate the Key and the IV.
  • The Key and the IV are then fed into the AES function to produce the ciphertext.
  • The final result is a base64-encoded string that starts with the Salted__ marker, followed by the 8-byte salt and the actual ciphertext.
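
Putting those steps together in C# looks roughly like this. It is a sketch under the assumptions above (class and method names are mine); crypto-js defaults to a 32-byte key, a 16-byte IV, AES-CBC and PKCS7 padding.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class CryptoJsAes
{
    // Decrypts the output of cryptojs.AES.encrypt(message, passphrase).toString()
    public static string Decrypt(string base64Payload, string passphrase)
    {
        var payload = Convert.FromBase64String(base64Payload);

        // Bytes 0-7 are the ASCII marker "Salted__", bytes 8-15 are the salt
        var salt = payload.Skip(8).Take(8).ToArray();
        var ciphertext = payload.Skip(16).ToArray();

        // OpenSSL EVP_BytesToKey with MD5: 32-byte Key + 16-byte IV
        DeriveKeyAndIv(Encoding.UTF8.GetBytes(passphrase), salt, out var key, out var iv);

        using var aes = Aes.Create();
        aes.Key = key;
        aes.IV = iv;
        aes.Mode = CipherMode.CBC;
        aes.Padding = PaddingMode.PKCS7;

        using var decryptor = aes.CreateDecryptor();
        var plain = decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
        return Encoding.UTF8.GetString(plain);
    }

    private static void DeriveKeyAndIv(byte[] passphrase, byte[] salt, out byte[] key, out byte[] iv)
    {
        // D_i = MD5(D_{i-1} + passphrase + salt), concatenated until 48 bytes are available
        var derived = new List<byte>();
        var previous = Array.Empty<byte>();
        using var md5 = MD5.Create();
        while (derived.Count < 48)
        {
            previous = md5.ComputeHash(previous.Concat(passphrase).Concat(salt).ToArray());
            derived.AddRange(previous);
        }
        key = derived.Take(32).ToArray();
        iv = derived.Skip(32).Take(16).ToArray();
    }
}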
Read more

Nodejs has been a pain point in our code base for years. It used to be the best choice when we started building our product, but I have never considered it a good solution for scaling. I have been trying to find a better language and a better architecture to help the team scale in the future. I finally decided to go with C# and Clean Architecture. They are not the best options out there, but at least they fit the existing tech stack of the organization.

I will have another series about the mistakes in designing applications from my own experience (which is also related to the Nodejs code base). In this post, I’m going to summarize how I built the new architecture using Clean Architecture with C# and the advantages of MediatR in making it really clean.

Clean Architecture revisited

You may have already seen the famous Clean Architecture circle diagram many times before. It’s a bit complicated for me, so I will keep it simple by drawing just these 3 circles.

Reference

Each circle is represented by a Project in C#. The outer one references the inner one, never the reverse. The inner one should have no knowledge of the outer framework it runs on top of.

Read more

Scaling the System at AR - Part 5 - Message Queue for Scaling team

If you have read some of my previous blog posts, you may know that we have been stuck with Rethinkdb for years. Rethinkdb was a good database. However, its development stopped some years ago and there is no sign that it will be continued in the future. We have been following some very active people in the community and even thought about donating to them. However, all of them have lost interest in Rethinkdb and decided to move forward with other alternative solutions. Also, as I have already mentioned in Mistakes of a Software Engineer - Favor NoSQL over SQL, most of our use cases are no longer suitable for the design of Rethinkdb and all the optimizations we made are reaching their limit.

After several discussions and a lot of analysis, we decided to move away from Rethinkdb to MS SQL Server. Some requirements that we had to satisfy were

  • The user should be able to view and edit the data normally, without any downtime.
  • There should be a backup plan for it.
  • There should be an experimental period, where we can pick some users, turn on the new database and analyze the correctness of the data.

This can be achieved easily using the Pub/Sub model and Message Queue design described in Scaling the System at AR - Part 5 - Message Queue for Scaling team.

Flow
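
To make the flow a bit more concrete, here is a rough sketch of the consumer side under that design. The message and repository types are hypothetical, not the actual ones we use; the point is that the SQL Server writer is just one more subscriber to the same change stream, so it can be filled and verified for a few users while Rethinkdb keeps serving everyone else.

using System.Threading;
using System.Threading.Tasks;

// Hypothetical change message published whenever a contact is created or updated
public class ContactChangedMessage
{
    public int ContactId { get; set; }
    public string PayloadJson { get; set; }
}

// Hypothetical SQL-backed repository used by the new consumer
public interface IContactRepository
{
    Task UpsertAsync(int contactId, string payloadJson, CancellationToken cancellationToken);
}

// The new subscriber: writes every change into MS SQL Server, independently of the
// existing Rethinkdb writer that still serves production traffic
public class SqlServerContactWriter
{
    private readonly IContactRepository _repository;

    public SqlServerContactWriter(IContactRepository repository)
    {
        _repository = repository;
    }

    public Task HandleAsync(ContactChangedMessage message, CancellationToken cancellationToken)
        => _repository.UpsertAsync(message.ContactId, message.PayloadJson, cancellationToken);
}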

Read more

Scaling the System at AR - Part 4 - Message Queue at AR

The problem of scaling team

As the product grows bigger and bigger, one team is no longer capable of developing and maintaining the whole system. The organization won’t scale if we operate in a top-down style, where one Product Owner or Tech Leader dictates the decisions of all teams. Sooner or later it will become a big monolith application, where changes in one place can cause cascading effects on all the other teams.

The philosophy that we use for scaling the product is to split it into smaller ones and build multiple autonomous teams, each responsible for one (or more) business domain. Each individual team is free to experiment with new technologies and choose the tech stack that suits them best without impacting the other teams.

There are, of course, plenty of difficulties when splitting teams

  • How to make those teams operate independently?
  • How can we make sure an error in one team doesn’t cause cascading effects on the other teams?
  • How does one team access the data that another team manages?
  • How does one team update the data that belongs to another team?
  • How can one team encapsulate the underlying business logic from the other teams?
  • How do we maintain the consistency across teams?
  • What will happen if we add one (or more) teams in the future?

I won’t talk much about process, operations or management skills here, simply because I’m not an expert on those. I will just show some technical techniques related to Message Queues that we used to support the ultimate goal of splitting into and building small autonomous teams.

How about an API Server for each team?

One simple way that people usually think of is to expose an API Server/Gateway to the other teams in the company. An API Server can help

  • Limit the permissions of other teams/services on the current team’s resources, allowing the other teams to read/write only certain entities.
  • Prevent the other teams from causing unwanted effects on the internal data structures of the current team.
  • Abstract away the complex underlying business logic of the team.
  • Each team can choose its own technology stack, no matter whether they follow a Monolith or Microservice design, and no matter whether they use Nodejs or C#, as long as the API is a standard one (usually an HTTP API).
Read more

After nearly 10 years of riding a Wave, I have finally upgraded to a manual-clutch motorbike for the first time. This is just a blog post to show off the bike on a whim, nothing more. 🤣

Actually, I have liked manual-clutch bikes for a long time. Back in the day I learned to drive a car first, which came with the full clutch-and-gears setup. Only afterwards did I learn to ride a geared motorbike, and it suddenly felt like something was missing 😂. At that time I planned to buy an Exciter 135, but I ended up saving the money for a Wave plus a laptop to head south for university. Luckily I didn’t buy it back then, otherwise people would now say I ride around stealing dogs 😂

10 years have passed, and it’s time to upgrade to something more exciting than the Wave I’ve been riding. Before Tết, Trường had already been eyeing the MT-15; once the Tết bonus came in, it was time to go get it.

The first time I went to see it at the shop, the design was quite beautiful

Img1

Read more

Mistakes of a Software Engineer - Favor NoSQL over SQL - Part 2

Are those all the problems that I have with NoSQL?

No. But I’m lazy now 😂. So I won’t talk about the problems anymore.

Choosing a Database system with a Product perspective

So, how do I choose a database system from my Product-engineer point of view?

The answer is: It depends. (of course 😂)

The philosophies of the 2 database systems are different.

NoSQL is designed for simple operations with very high read/write throughput. It fits best when you usually read/write whole unstructured objects using very simple queries. There are some specific use cases you can think about

  • Receive data from another system (an integration API for example): You can use a NoSQL database as temporary storage to improve API response time and safety by deferring data processing to a later time. Read more in Scaling the System at AR - Part 2 - Message Queue for Integration
  • A backing service to store Message Values for a Message Queue application
  • A caching database where you know exactly what you need and can query easily by key (but even in this case, think of other solutions in SQL first, like materialized views, triggers,…).
  • Others?
Read more

Mistakes of a Software Engineer - Favor NoSQL over SQL - Part 1

There is nothing called Schemaless

One argument that people usually make for NoSQL is the flexibility in schema design. They are Schemaless databases, very similar to objects in a programming language. The dynamic schema is supposedly very powerful and can support many different use cases, without the limitations of what SQL provides. You can simply throw your objects in and do anything you want.

Is it really true? For me, it’s NOT.

There is nothing called Schemaless. The schema just moves from the Database level to the Application level. With Document databases, you are actually using an implicit data schema model (compared to the explicit data schema in SQL). Instead of letting the database handle the schema and data types itself, you have to do it yourself in your application code.

  • Some databases support simple schema validation. However, none of them are as capable as what SQL provides. You are limited to just some basic data types (string, number, boolean,…) while SQL offers a wide range of data types for different use cases (for example: varchar, nvarchar, text,… just for string data). Working with strict data types is always easier and more performant.
  • If you handle the schema yourself, it may be OK when you use a statically typed programming language (like C#). However, if you are developing your application in Nodejs, Ruby, PHP,… (which is quite common for startup companies), you have to maintain another layer of schema validation in your application because the data is just runtime objects.

Schema migration is another problem. NoSQL databases encourage you to just throw the objects in and not care much about the schema. Some databases don’t even require you to create any tables: simply write the data and the tables will be created automatically. When you read/write the data, it is hard to know which schema version the data is on. You end up adding many if statements just to check for what should have been provided. No matter how carefully you design your application or how strict your coding conventions are, there will always be cases where your code works with an outdated data schema. For critical business workflows, I want everything to be expressed clearly, not implicitly.
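
As a quick illustration of what that implicit schema handling looks like in application code (a hypothetical example, not taken from our code base):

using System.Collections.Generic;

public static class ContactReader
{
    // Every reader has to guard against fields that older writers never set
    public static string GetDisplayName(IDictionary<string, object> doc)
    {
        // Newer documents use "displayName", older ones only have "name",
        // and the oldest ones have neither
        if (doc.TryGetValue("displayName", out var displayName))
            return (string)displayName;

        if (doc.TryGetValue("name", out var name))
            return (string)name;

        return "(unknown)";
    }
}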

Read more