Using ElasticSearch, Kibana, ASP.NET Core and Docker to Discover and Visualize data

Posted by: Daniel Jimenez Garcia, on 3/27/2017, in Category ASP.NET Core
Abstract: Use the Elasticsearch API in an ASP.NET Core and Docker project and combine it with applications like Kibana for data analysis, reporting and visualization.

Can you easily perform queries over your data in many different ways, perhaps in ways you have never anticipated? Are you able to visualize your logs in multiple ways while supporting instant filtering based on time, text and other types of filters?

These are just 2 examples of what can be easily achieved with the Elastic stack in a performant and scalable way.


In this article, I will introduce the popular search engine Elasticsearch, its companion visualization app Kibana and show how .Net Core can easily be integrated with the Elastic stack.


Figure 1, Elasticsearch and .Net

We will start exploring Elasticsearch through its REST API, by indexing and querying some data. Then we will perform a similar exercise using the official Elasticsearch .Net API. Once familiarized with Elasticsearch and its APIs, we will create a logger which can be plugged within .Net Core and which sends data to Elasticsearch. Kibana will be used along the way to visualize the data indexed by Elasticsearch in interesting ways.

I hope you will find the article interesting enough to leave you wanting to read and learn more about the powerful stack that Elastic provides. I certainly think so!

This article assumes a basic knowledge of C# and of REST APIs. It will use tools like Visual Studio, Postman and Docker but you could easily follow along with alternatives like VS Code and Fiddler.

Elasticsearch - Brief introduction

Elasticsearch at its core is a document store with powerful indexing and search capabilities exposed through a nice REST API. It is written in Java and based internally on Apache Lucene, although these details are hidden beneath its API.

Any document stored (or indexed) gets its fields indexed as well - fields which can then automatically be searched for and aggregated in many different ways.

But Elasticsearch doesn’t stop at just providing powerful search over these indexed documents.

It is fast, distributed and horizontally scalable, supporting real-time document storage and analytics, with clusters spanning hundreds of servers and petabytes of indexed data. It also sits at the core of the Elastic stack (aka ELK), which provides powerful applications like Logstash, Kibana and more.

Kibana specifically provides a very powerful querying and visualization web application on top of Elasticsearch. Using Kibana, it is very easy to create queries, graphs and dashboards for your data indexed in Elasticsearch.

Elasticsearch exposes a REST API, and you will find many of the documentation examples written as HTTP calls which you can try using tools like curl or Postman. Of course, clients for this API have been written in many different languages, including .Net, Java, Python, Ruby and JavaScript amongst others.

If you want to read more, the official elastic website is probably the best place to start.

Docker, the easiest way of getting up and running locally

During this article, we will need to connect to an instance of Elasticsearch (and later Kibana). If you already have one running locally or have access to a server that you can use, that’s great. Otherwise you will need to get one.

You have the option of downloading and installing Elasticsearch and Kibana either on your local machine or on a VM/server you can use. However, I would suggest using Docker as the simplest and cleanest way to explore and play with Elasticsearch and Kibana.

You can simply run the following command to get a container up and running which contains both Elasticsearch and Kibana:

docker run -it --rm -p 9200:9200 -p 5601:5601 --name esk nshou/elasticsearch-kibana
  • -it starts the container in interactive mode with a terminal attached.
  • --rm removes the container as soon as you exit from the terminal.
  • -p maps a port inside the container to a port on the host.
  • --name gives the container a name, in case you don’t use --rm and prefer to stop/remove it manually.
  • nshou/elasticsearch-kibana is the name of an image on Docker Hub that someone has already prepared with Elasticsearch and Kibana inside.
  • If you prefer, you can start the container in the background using the -d argument instead, and manually stop/remove it later using docker stop esk and docker rm esk.

Running multiple applications in the same container, as we are doing here, is great for trying them out locally and for the purposes of this article, but it isn’t the recommended approach for production containers!

You should also be aware that your data will be gone once you remove the container (or as soon as you stop it, if you used the --rm option). While this is fine for experimenting, in real environments you don’t want to lose your data, so you would follow patterns like the “data container” instead.

Docker is a great tool and I would encourage you to learn more about it, especially if you want to use it for something more serious than following this article and quickly trying Elasticsearch locally. I wrote a nice introduction to Docker for .Net Core in my previous article Building DockNetFiddle using Docker and .NET Core.

Simply open http://localhost:9200 and http://localhost:5601 and you will see both Elasticsearch and Kibana ready to use. (If you use Docker Toolbox, replace localhost with the IP of the VM hosting Docker, which you can get by running docker-machine env default on the command line.)


Figure 2, Elasticsearch up and running in Docker

 


Figure 3, Kibana also ready to go

Indexing and querying documents in Elasticsearch

Before we start writing any .Net code, let’s try out some basic features of our new and shiny environment. The objective will be to index some documents (akin to storing data) which will be analysed by Elasticsearch, allowing us to run different queries over them.

Here I am going to use Postman to send HTTP requests to our Elasticsearch instance, but you could use any other similar tool like Fiddler or curl.

The first thing we are going to do is to ask Elasticsearch to create a new index and index a couple of documents. This is similar to storing data in a table/collection, the main difference (and purpose!) being that the Elasticsearch cluster (a single node in a simple docker setup) will analyze the document data and make it searchable.

Indexed documents in Elasticsearch are organized in indexes and types. In the past, this has been compared to DBs and tables, which has been confusing. As the documentation states, indexes and types are closely related to the way the data is distributed across shards and indexed by Lucene. In short, there is a penalty for using and querying multiple indexes, so types can be used to organize data within a single index.

Send these two requests in order to create an index and insert a document into that index (remember, if you use Docker Toolbox, use the IP of the VM hosting Docker instead of localhost):

· Create a new index named “default”:

PUT localhost:9200/default

· Index a document in the “default” index. Notice we need to specify which type of document we are storing (product) and the id of that document (1, although you could use any value as long as it is unique):

PUT localhost:9200/default/product/1
{ 
    "name": "Apple MacBook Pro",
    "description": "Latest MacBook Pro 13",
    "tags": ["laptops", "mac"]
}


Figure 4, creating a new index


Figure 5, indexing a new document

Before we move on and verify we can retrieve and query our data, index a few more products. Try to use different tags like desktops and laptops and remember to use different ids!
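
For example, a second product could be indexed like this (the id and field values are just illustrative):

PUT localhost:9200/default/product/2
{
    "name": "Dell OptiPlex 3040",
    "description": "Dell OptiPlex 3040 desktop",
    "tags": ["desktops", "windows"]
}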

Once you are done, let’s retrieve all the indexed documents ordered by their names. You can either use the query string syntax or a GET/POST with body, the following two requests being equivalent:

GET http://localhost:9200/default/_search?q=*&sort=name.keyword:asc
 
POST http://localhost:9200/default/_search
{
  "query": { "match_all": {} },
  "sort": [
    { "name.keyword": "asc" }
  ]
}

Let’s try something a bit more interesting like retrieving all the documents which contain the word “latest” in their description and the word “laptops” in the list of tags:

POST http://localhost:9200/default/_search
{
  "query": { 
      "bool": {
      "must": [
        { "match": {"description": "latest"} },
        { "match": { "tags": "laptops" } }
      ]
    }
  },
  "sort": [
    { "name.keyword": "asc" }
  ]
}

Figure 6, querying documents

Visualizing data in Kibana

We are going to take a quick look at Kibana and scratch its surface in this final part of the introduction.

Assuming you have inserted a few documents while following the previous section, open your docker instance of Kibana at http://localhost:5601. You will notice that Kibana asks you to provide the default index pattern, so it knows which Elasticsearch indexes it should use:

· Since we have created a single index named “default” in the previous section, you can use “default*” as the index pattern.

· You will also need to unselect the option “Index contains time-based events” since our documents do not contain any time field.


Figure 7, adding the index pattern in Kibana

Once you have done that, open the “Discover” page using the left side menu and you should see all the latest documents inserted in the previous section. Try selecting different fields, entering a search term in the search bar, or applying individual filters:


Figure 8, visualizing documents in Kibana

Finally, let’s create a pie chart showing the percentage of products which are laptops or desktops. Go to the Visualize page using the left side menu and create a new “Pie Chart” visualization using the index pattern created before.

You will land on a page where you can configure the pie chart. Leave “Count” as the slice size and select “split slices” in the buckets section. Select “filters” as the aggregation type and enter 2 filters for tags="laptops" and tags="desktops". Click run and you will see something similar to the following screenshot:


Figure 9, creating a pie chart in Kibana

Make sure you try to enter a search term in the search bar and notice how your visualization changes and includes just the filtered items!

Elasticsearch .Net API

After a brief introduction to Elasticsearch and Kibana, let’s see how we can index and query our documents from a .Net application.

You might be wondering why you would want to do this instead of directly consuming the HTTP API. I can provide a few reasons, and I am sure you will be able to find a few on your own:

  • You don’t want to directly expose your Elasticsearch cluster in the open.
  • Elasticsearch might not be your main database and you might need to merge or hydrate the results from the main source.
  • You want to include data stored/generated server side within the documents indexed.

The first thing you will notice when you open the documentation is that there are two official APIs for .Net: Elasticsearch.Net and NEST, both supporting .Net Core projects.

  • Elasticsearch.Net provides a low-level API for connecting with Elasticsearch and leaves to you the work of building/processing the requests and responses. It is a very thin client for consuming the HTTP API from .Net.
  • NEST sits on top of Elasticsearch.Net and provides a higher-level API. It can map your objects to/from requests/responses, make assumptions about index names, document types and field types, and provides a strongly typed language for building your queries that mirrors the HTTP REST API.


Figure 10, Elasticsearch .Net APIs

Since I am going to use NEST, the first step would be to create a new ASP.Net Core application and install NEST using the Package Manager (Install-Package NEST from the Package Manager Console, or dotnet add package NEST from the command line).

Start indexing data with Nest

We are going to replicate, in the new ASP.Net Core application, some of the steps we took while manually sending HTTP requests. If you want, restart the docker container to clean the data, or manually delete documents/indexes using the HTTP API and Postman.

Let’s start by creating a POCO model for the products:

public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public string[] Tags { get; set; }        
}

Next let’s create a new controller ProductController with a method to add a new product and a method to find products based on a single search term:

[Route("api/[controller]")]
public class ProductController : Controller
{

    [HttpPost]
    public async Task<IActionResult> Create([FromBody]Product product)
    {
    }

    [HttpGet("find")]
    public async Task<IActionResult> Find(string term)
    {
    }
}

In order to implement any of these methods, we are going to need a connection to Elasticsearch. This is done by instantiating an ElasticClient with the right connection settings. Since this class is thread-safe, the recommended approach is to use it as a singleton in your application instead of creating a new connection per request.

For the purposes of brevity and clarity, I am going to use a private static variable with hardcoded settings for now. Feel free to use dependency injection and the configuration/options frameworks in .Net Core, or check the companion code on GitHub.

As you can imagine, at the very minimum, you will need to provide the URL to your Elasticsearch cluster in the connection settings. Of course, there are additional optional parameters for authenticating with your cluster, setting timeouts, connection pooling and more.

private static readonly ConnectionSettings connSettings =
    new ConnectionSettings(new Uri("http://localhost:9200/"));        
private static readonly ElasticClient elasticClient = 
    new ElasticClient(connSettings);
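
If you do prefer dependency injection, a minimal sketch (assuming the registration happens in Startup.ConfigureServices) could register the client as a singleton instead:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // ElasticClient is thread-safe, so a single shared instance is enough
    var connSettings = new ConnectionSettings(new Uri("http://localhost:9200/"));
    services.AddSingleton<IElasticClient>(new ElasticClient(connSettings));
}

Controllers would then receive an IElasticClient through their constructor instead of using a static field.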

Once a connection is established, indexing documents is as simple as using the Index/IndexAsync methods of ElasticClient:

[Route("api/[controller]")]
public class ProductController : Controller
{

    [HttpPost]
    public async Task<IActionResult> Create([FromBody]Product product)
    {
    }

    [HttpGet("find")]
    public async Task<IActionResult> Find(string term)
    {
    }
}

Simple, isn’t it? Unfortunately, if you send the following request with Postman, you will see the code fail.

POST http://localhost:65113/api/product
{ 
    "name": "Dell XPS 13",
    "description": "Latest Dell XPS 13",
    "tags": ["laptops", "windows"]
} 

This is happening because NEST isn’t able to determine which index should be used when indexing the document! If you remember, when manually using the HTTP API, the URL specified the index, document type and id of the document, as in: localhost:9200/default/product/1.

NEST is able to infer the type of the document (using the class name) and can also default how fields will be indexed (based on the field types), but it needs a bit of help with the index names. You can specify a default index to be used when one cannot be determined, as well as specific index names for specific types.

connSettings = new ConnectionSettings(new Uri("http://192.168.99.100:9200/"))
    .DefaultIndex("default")
    //Optionally override the default index for specific types
    .MapDefaultTypeIndices(m => m
        .Add(typeof(Product), "default"));

Try again after making these changes. You will see how NEST is now able to create the index if it wasn’t already there, and the document is indexed. If you switch to Kibana, you should also be able to see the document. Notice how NEST:

  • Inferred the document type from the class name, Product.
  • Inferred the Id as the Id property of the class.
  • Included every public property in the document sent to Elasticsearch.


Figure 11, document indexed using NEST

Before we move into querying data, I want to revisit the way indexes are created.

How are Indexes Created

Right now, we rely on the fact that the index will be created for us if it doesn’t exist. However, the way fields are mapped in the index is important and directly defines how Elasticsearch will index and analyse these fields. This is particularly obvious with string fields, since Elasticsearch v5 provides two different types, “Text” and “Keyword”:

  • Text fields will be analyzed and split into words so they can tap into the more advanced search features of Elasticsearch
  • On the other hand, Keyword fields will be taken “as is” without being analyzed and can only be searched by their exact values.

You can annotate your POCO models with attributes that NEST can use to generate more specific index mappings:

public class Product
{
    public Guid Id { get; set; }
    [Text(Name="name")]
    public string Name { get; set; }
    [Text(Name = "description")]
    public string Description { get; set; }
    [Keyword(Name = "tag")]
    public string[] Tags { get; set; }        
}

However, now that we want to provide index mappings, we have to create the index and define the mappings ourselves using the ElasticClient API. This is very straightforward, especially if we are just using the attributes:

if (!elasticClient.IndexExists("default").Exists)
{
    elasticClient.CreateIndex("default", i => i
        .Mappings(m => m
            .Map<Product>(ms => ms.AutoMap())));
}

Send a request directly to Elasticsearch (GET localhost:9200/default) and notice how the mapping settings for name and description fields are different from the one for the tags field.


Figure 12, index mapping created with NEST

Querying data with Nest

Right now, we have a ProductController that can index products in Elasticsearch using NEST. It is time we implement the Find action of the controller and use NEST to query the documents indexed in Elasticsearch.

We are going to implement a simple search with a single term. It will look at every field and you will notice how:

  • Fields mapped as “Text” were analysed. You will be able to search for specific individual words inside the name/description fields
  • Fields mapped as “Keyword” were taken as-is and not analysed. You will only be able to search for exact matches of the tags.

NEST provides a rich API for querying Elasticsearch that maps to the standard HTTP API. Implementing the type of query described above is as simple as using the Search/SearchAsync methods and building a SimpleQueryString query as the argument:

[HttpGet("find")]
public async Task<IActionResult> Find(string term)
{
    var res = await elasticClient.SearchAsync<Product>(x => x
        .Query( q => q.
            SimpleQueryString(qs => qs.Query(term))));
    if (!res.IsValid)
    {
        throw new InvalidOperationException(res.DebugInformation);
    }

    return Json(res.Documents);
}

Test your new action using Postman:


Figure 13, testing the new action using NEST for querying Elasticsearch
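
For reference, the request behind Figure 13 would look something like this (the port and search term are illustrative):

GET http://localhost:65113/api/product/find?term=laptops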

As you might have already realized, our action behaves much the same as manually sending the following request to Elasticsearch, where the q parameter contains the search term:

GET http://localhost:9200/default/_search?q=<term>

Creating an Elasticsearch .Net Core Logger Provider

Now that we have seen some of the basics of NEST, let’s try something a bit more ambitious. Since we have created an ASP.Net Core application, we can take advantage of the new logging framework and implement our own logger that sends information to Elasticsearch.

The new logging API differentiates between the logger and the logger provider.

  • A logger is used by client code, like a controller class, to log information and events.
  • Multiple logger providers can be added and enabled for the application. These will register the logged information/events and can be configured with independent logging levels from Trace to Critical.

The framework comes with a number of built-in providers for the console, event log, Azure and more but, as you will see, creating your own isn’t complicated. For more information, check the Logging article in the official documentation.
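
As a reminder of the consuming side of this API, this is roughly how a controller would use an injected logger (a minimal sketch with a hypothetical HomeController):

public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        // This event is sent to every registered provider, including ours once it is added
        _logger.LogInformation("Rendering the home page");
        return View();
    }
}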

In the final sections of the article, we will create a new logger provider for Elasticsearch, enable it in our application and use Kibana to view the logged events.

Add a new logger provider for Elasticsearch

The first thing to do is define a new POCO object that we will use as the document to index using NEST, similar to the Product class we created earlier.

This will contain the logged information, optional information about any exception that might have happened and relevant request data. Logging the request data will come in handy so we can query/visualize our logged events in relation to specific requests.

public class LogEntry
{
    public DateTime DateTime { get; set; }
    public EventId EventId { get; set; }
    [Keyword]
    [JsonConverter(typeof(StringEnumConverter))]
    public Microsoft.Extensions.Logging.LogLevel Level { get; set; }
    [Keyword]
    public string Category { get; set; }
    public string Message { get; set; }

    [Keyword]
    public string TraceIdentifier { get; set; }
    [Keyword]
    public string UserName { get; set; }        
    [Keyword]
    public string ContentType { get; set; }
    [Keyword]
    public string Host { get; set; }         
    [Keyword]
    public string Method { get; set; }        
    [Keyword]
    public string Protocol { get; set; }
    [Keyword]
    public string Scheme { get; set; }
    public string Path { get; set; }
    public string PathBase { get; set; }
    public string QueryString { get; set; }
    public long? ContentLength { get; set; }
    public bool IsHttps { get; set; }
    public IRequestCookieCollection Cookies { get; set; }
    public IHeaderDictionary Headers { get; set; }

    [Keyword]
    public string ExceptionType { get; set; }        
    public string ExceptionMessage { get; set; }
    public string Exception { get; set; }
    public bool HasException { get { return Exception != null; } }
    public string StackTrace { get; set; }
}

The next step is implementing the ILogger interface on a new class. As you can imagine, this will receive the logged data, map it to a new LogEntry object and use the ElasticClient to index the document in Elasticsearch.

· We will use an IHttpContextAccessor so we can get the current HttpContext and extract the relevant request properties.

I am not going to copy the code again to connect with Elasticsearch and create the index, as there is nothing new compared to what we did earlier. Either use a different index or delete the index with the products indexed in the previous section.

Note: You can check the companion code on GitHub for an approach using dependency injection and configuration files.
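
For orientation, the skeleton of the logger class could look like this minimal sketch (the ESLogger name and constructor parameters match the provider shown later; the Elasticsearch connection setup is omitted, as mentioned):

public class ESLogger : ILogger
{
    private readonly IHttpContextAccessor _httpContextAccessor;
    private readonly string _categoryName;
    private readonly LogLevel _logLevel;

    public ESLogger(IHttpContextAccessor httpContextAccessor, string categoryName, LogLevel logLevel)
    {
        _httpContextAccessor = httpContextAccessor;
        _categoryName = categoryName;
        _logLevel = logLevel;
        // The ElasticClient connection and index creation are set up here as well,
        // reusing the same approach shown earlier in the article
    }

    // Log<TState>, IsEnabled and BeginScope are implemented next
}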

The main method to implement is Log<TState>, which is where we create a LogEntry and index it with NEST:

public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
{
    if (!IsEnabled(logLevel)) return;

    var message = formatter(state, exception);
    var entry = new LogEntry
    {
        EventId = eventId,
        DateTime = DateTime.UtcNow,
        Category = _categoryName,
        Message = message,
        Level = logLevel
    };

    var context = _httpContextAccessor.HttpContext;
    if (context != null)
    {                
        entry.TraceIdentifier = context.TraceIdentifier;
        entry.UserName = context.User.Identity.Name;
        var request = context.Request;
        entry.ContentLength = request.ContentLength;
        entry.ContentType = request.ContentType;
        entry.Host = request.Host.Value;
        entry.IsHttps = request.IsHttps;
        entry.Method = request.Method;
        entry.Path = request.Path;
        entry.PathBase = request.PathBase;
        entry.Protocol = request.Protocol;
        entry.QueryString = request.QueryString.Value;
        entry.Scheme = request.Scheme;

        entry.Cookies = request.Cookies;
        entry.Headers = request.Headers;
    }

    if (exception != null)
    {
        entry.Exception = exception.ToString();
        entry.ExceptionMessage = exception.Message;
        entry.ExceptionType = exception.GetType().Name;
        entry.StackTrace = exception.StackTrace;
    }

    elasticClient.Client.Index(entry);
}

You also need to implement the additional BeginScope and IsEnabled methods.

  • Ignore BeginScope for the purposes of this article, just return null.
  • Update your constructor so it receives a LogLevel, then implement IsEnabled, returning true if the level being compared is greater than or equal to the one received in the constructor:

public bool IsEnabled(LogLevel logLevel)
{
    return logLevel >= _logLevel;
}
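
For completeness, BeginScope can be the following no-op, as mentioned above:

public IDisposable BeginScope<TState>(TState state)
{
    // Scopes are ignored for the purposes of this article
    return null;
}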

With ILogger implemented, create the new ILoggerProvider. This class is responsible for creating instances of ILogger for a specific category and level. It will also receive the configured settings, which map each category to its logging level.

What’s the category, you might ask? This is a string that identifies which part of your system logged the event. Every time you inject an instance of ILogger<T>, the category is assigned by default as the type name of T. For example, acquiring an ILogger<MyController> and using it to log some events means those events will have “MyController” as the category name.

This can come in handy, for example, to set different verbosity levels for individual categories, to filter/query the logged events, and for many other usages that I am sure you can think of.

The implementation of this class would look like the following:

public class ESLoggerProvider: ILoggerProvider
{
    private readonly IHttpContextAccessor _httpContextAccessor;
    private readonly FilterLoggerSettings _filter;

    public ESLoggerProvider(IServiceProvider serviceProvider, FilterLoggerSettings filter = null)
    {
        _httpContextAccessor = serviceProvider.GetService<IHttpContextAccessor>();
        _filter = filter ?? new FilterLoggerSettings
        {
            {"*", LogLevel.Warning}
        };
    }

    public ILogger CreateLogger(string categoryName)
    {
        return new ESLogger(_httpContextAccessor, categoryName, FindLevel(categoryName));
    }

    private LogLevel FindLevel(string categoryName)
    {
        var def = LogLevel.Warning;
        foreach (var s in _filter.Switches)
        {
            if (categoryName.Contains(s.Key))
                return s.Value;

            if (s.Key == "*")
                def = s.Value;
        }

        return def;
    }

    public void Dispose()
    {
    }
}

Finally let’s create an extension method that can be used to register our logger provider in the Startup class:

public static class LoggerExtensions
{
    public static ILoggerFactory AddESLogger(this ILoggerFactory factory, IServiceProvider serviceProvider, FilterLoggerSettings filter = null)
    {
        factory.AddProvider(new ESLoggerProvider(serviceProvider, filter));
        return factory;
    }
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"))
        .AddDebug()
        .AddESLogger(app.ApplicationServices, new FilterLoggerSettings
        {
            {"*", LogLevel.Information}
        });
    …
}

Notice how I am overriding the default settings to assign a logging level of Information to every category. This is done so that we can easily index some events on Elasticsearch for every request.
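
As a purely illustrative variation, you could keep the framework categories quieter while logging everything else at Information, relying on the FindLevel logic of the provider:

loggerFactory.AddESLogger(app.ApplicationServices, new FilterLoggerSettings
{
    {"Microsoft", LogLevel.Warning},  // any category containing "Microsoft" logs warnings and above
    {"System", LogLevel.Warning},
    {"*", LogLevel.Information}       // default for every other category
});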

Visualizing data in Kibana

Now that we are logging events into Elasticsearch, let’s use Kibana to explore the data!

First of all, recreate the index pattern in Kibana and this time make sure to select “Index contains time-based events”, selecting the field dateTime as the “Time-field name”.

Next, fire up your app and navigate through some pages to get a few events logged. Also feel free to add code for throwing exceptions at some endpoint, so we can see the exception data logged as well.
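
If you want some exceptions in the data, a hypothetical endpoint like the following (added to any controller) is enough to generate them on demand:

[HttpGet("boom")]
public IActionResult Boom()
{
    // Deliberately fail so the exception details get indexed by our logger
    throw new InvalidOperationException("Exception generated on purpose to test the logger");
}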

After using your application a bit, go to the Discover page in Kibana where you should see a number of events logged, ordered by the dateTime field (By default data is filtered to the last 15 minutes, but you can change that in the upper right corner):


Figure 14, visualizing the logged events in Kibana. Select the time range in the upper right corner

Try entering exception in the search bar and notice how it filters down the events to the ones that contain the word exception in any of the analysed text fields. Then try searching for a specific exception type (remember we used a keyword field for it!).

You can also try to search for a specific URL in two different ways, as in /Home/About and path:"/Home/About". You will notice how the first case includes events where the referrer was /Home/About, while the second correctly returns only events where the path was /Home/About!

Once you have familiarized yourself with the data and how it can be queried, let’s move to creating a couple of interesting graphs for our data.

First we are going to create a graph showing the number of exceptions logged per minute.

  • Go to the Visualize page in Kibana and create a new Vertical bar chart.
  • Leave the Count selected for the Y-axis and add a date histogram on the X-axis.
  • Set the interval as minutes and finally add a filter hasException:true in the search box.

You will get a nice graph showing the number of exceptions logged per minute:


Figure 15, exceptions logged per minute

Next, show the number of messages logged for each category over time, limited to the top 5 chattiest categories:

  • Go to the Visualize page in Kibana and create a new Line chart.
  • Again leave the Count selected for the Y-axis and add a date histogram on the X-axis, selecting dateTime as the field and minutes as the interval.
  • Now add a sub-bucket and select “split lines”. Use “significant terms” as the sub aggregation, category as the field and size of 5.

This will plot something similar to the following:


Figure 16, chattiest categories over time

Try adding some filters on the search box and see how it impacts the results.

Finally, let’s add another graph where we will see the top five most repeated messages for each of the top five categories with the most messages.

  • Go to the Visualize page in Kibana and create a new Pie chart.
  • As usual, leave Count selected, this time as the slice size.
  • Now split the chart in columns by selecting “Terms” as the aggregation, “category” as the field, count as the metric and 5 as the limit.
  • Then split the slices by selecting “Terms” as the aggregation, “message.keyword” as the field, count as the metric and 5 as the limit.

Once you have these settings in place, you will see a chart similar to mine:


Figure 17, most frequent messages per category

Take your time and inspect the data (the percentage and the actual message/category are shown when hovering over the chart elements). For example, you will realize that the exceptions are logged by the DeveloperExceptionPageMiddleware class.

Conclusion

Elasticsearch is a powerful platform for indexing and querying data. While it is quite impressive on its own, combining it with other applications like Kibana makes it a pleasure to work with in areas like data analysis, reporting and visualization. You can achieve non-trivial results almost from day one just by scratching the surface of what this platform provides.

When it comes to .Net and .Net Core, the official Elasticsearch APIs have you covered as they support .Net Standard 1.3 and greater (they are still working on providing support for 1.1).

As we have seen, using this API in an ASP.Net Core project was straightforward: we could easily use it both as the storage behind a REST API and inside a new logger provider added to the application.

Last but not least, I hope you enjoyed using Docker as you followed along. The freedom it gives you to try applications like Elasticsearch is a revelation but, at the same time, it is just a tiny fraction of what Docker can do for you and your team.

Download the entire source code of this article (Github).

This article has been editorially reviewed by Suprotim Agarwal.



Author
Daniel Jimenez Garcia is a passionate software developer with 10+ years of experience who likes to share his knowledge and has been publishing articles since 2016. He started his career as a Microsoft developer focused mainly on .NET, C# and SQL Server. In the latter half of his career he worked on a broader set of technologies and platforms with a special interest in .NET Core, Node.js, Vue, Python, Docker and Kubernetes. You can check out his repos.





