.NET Performance Question

Hey guys! It has been a while! I hope everyone’s well & staying safe!

As things have slowed down a little with work, I thought I’d take the time to ask a couple of questions around the .NET ecosystem & I was hoping that some of you more experienced .NET dudes would be able to point me in some decent directions. :sweat_smile:

So, without further ado: I’m currently working on a project that’s really interesting, but the tech isn’t aging well, some parts of the application simply haven’t been maintained, & there are parts that have plainly been abused by past developers.

I’m doing my best to balance building new features with maintaining old ones. E.g. this morning I managed to cut the response time of one endpoint from 16+ seconds down to a mere ~300ms. If I had the time, I’d try to push it down even further! :slightly_smiling_face:

To cut through the waffle: I’ve started to implement cleaner coding practices for the team to follow, making life a lot easier for ourselves in terms of maintainability, extensibility, you name it.


Example Time

So in some API endpoint, I have the following code. It’s nothing too complicated, & the complicated bits are all buried behind some relatively simple layers of abstraction:

// Some other stuff... Auth, etc.

var docs = repo.GetDocuments().Select(g => g.ToJson()).ToList();

var states = repo.GetStates()
                    .Select(s => new PipelineStateViewModel {
                        Id = s.Id,
                        Colour = s.Colour,
                        Name = s.Name,
                        Order = s.Order,
                        Documents = PipelineStateViewModel.MapState(s.Id, docs),
                        Page = PipelineStateViewModel.GetPage(s.Id, docs),
                        PageSize = PipelineStateViewModel.GetPageSize(s.Id, docs),
                        Count = PipelineStateViewModel.GetCount(s.Id, docs)
                    }).OrderBy(s => s.Order)
                    .ToList();

var abandonedDocs = repo.GetAbandonedDocuments()
                        .Select(d => Mapper.Map(d, new DocumentListViewModel()))
                        .ToList();

var pipeline = new PipelineViewModel()
    .MapFrom(repo.GetDocumentType())
    .WithStates(states)
    .WithAbandonedTypes(abandonedDocs);

// Some other code again...

But what shocked me is how bad the performance of this endpoint was before I included .ToList(). What confused the hell outta me is that, measuring quickly with a Stopwatch, the backend was taking around 800ms. Not great, but I can live with it for the time being. In the browser, though, it would take like 3s.

Since adding .ToList(), the response time has come down to 1.4s. I’ve been trying to do some homework on this, & from what I’ve read online, adding a call to .ToList() should, if anything, have theoretically slowed it down a little.

This goes to show that even after a year, there’s still one hell of a lot about the .NET ecosystem that baffles me. I’m really trying to learn to love it & I’m giving it the benefit of the doubt, but I approached the ecosystem with a lot of hesitation & reservations, & unfortunately I’ve not had a great time with it! :joy:

In all fairness to the .NET ecosystem, this is more likely down to the fact that I’m working with a LOT of spaghetti code. I mean, this is enough spaghetti to put you off pasta for life. It’s slowly getting better, don’t get me wrong, but as a team we still have one hell of a way to go! We’ve had all sorts of issues, one of the more nightmare-mode ones being a memory leak buried deep within the app; it’s mostly down to time, budget & resources that we simply haven’t been able to fix it yet. :slightly_frowning_face:

But there have been issues that simply shouldn’t be issues, something as basic as routing. For the love of money, I cannot work out why some strings for some routes cause the entire thing to catch fire & an entire controller instantly becomes inaccessible. Change the string to something stupid & blam, it works. It’s not even like there’s another route that could conflict or anything like that. Part of my suspicion is that there’s some config buried somewhere, & I know we’re using some pretty dated dependencies too.

But that’s enough of my complaining about .NET; clearly a lot of developers love it, looking at the surveys & whatnot. So I was hoping some wizard could point me down the path of enlightenment? :relieved:

It depends. IEnumerable is what you get back from a lot of things. I’m not sure what database access layer you’re using; it might also return an IEnumerable. Technically it can be a List, because List implements IEnumerable, but the only thing the interface guarantees is that the thing is iterable once (most of the time multiple iterations work, but that depends on the implementation). .Select() returns an IEnumerable, and when you iterate it multiple times, the projection runs again every time you do so (it does not create a list of the objects it produces). Technically even your database query could run twice, though it probably returns a list, and then that’s not really a problem.
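To make the multiple-enumeration point concrete, here’s a minimal sketch (the Console logging is only there to show when the projection actually runs; all the names are made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var source = new[] { 1, 2, 3 };

        // Deferred: at this point the lambda has not run at all yet.
        IEnumerable<string> lazy = source.Select(n =>
        {
            Console.WriteLine($"Projecting {n}");
            return n.ToString();
        });

        // Each of these enumerations runs the projection again
        // (3 + 3 = 6 "Projecting ..." lines in total).
        var firstCount = lazy.Count();
        var secondCount = lazy.Count();

        // Materialised: the projection runs exactly once more (3 lines),
        // and every later read hits the cached list instead.
        var snapshot = lazy.ToList();
        var a = snapshot.Count;
        var b = snapshot.Count;
    }
}
```

If the IEnumerable at the start is actually a live database query rather than an in-memory array, every extra enumeration can mean an extra round trip, which is why a single .ToList() can make such a dramatic difference.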

Can’t say for sure because I don’t know what your mapper is doing.


Just to illustrate this, you can have IEnumerables that are infinite and/or only iterable once. For instance, you can make an infinite sequence like this:

public IEnumerable<int> NumberSequence() {
    var i = 0;
    while (true) {
        yield return i++;
    }
}

And you can then .Take(10) on this to get the values 0-9. One subtlety: because this is an iterator method, enumerating it a second time restarts it from 0, since each enumeration gets a fresh enumerator. If you instead pull from a single enumerator yourself, you keep moving forward (10, 11, etc.), but you can never go back to 0.
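A quick sketch of both behaviours, using the NumberSequence method from above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class SequenceDemo
{
    static IEnumerable<int> NumberSequence()
    {
        var i = 0;
        while (true)
        {
            yield return i++;
        }
    }

    static void Main()
    {
        var seq = NumberSequence();

        // Each enumeration starts from scratch, so BOTH lines print 0..9.
        Console.WriteLine(string.Join(",", seq.Take(10)));
        Console.WriteLine(string.Join(",", seq.Take(10)));

        // Holding onto a single enumerator keeps the position between reads.
        using var e = seq.GetEnumerator();
        e.MoveNext();
        Console.WriteLine(e.Current); // 0
        e.MoveNext();
        Console.WriteLine(e.Current); // 1, and there is no way back to 0
    }
}
```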


It also goes further than that, in that things like .Count() may be terribly slow (or not). LINQ is smart about it and first checks whether the source is really a collection with a stored count (an ICollection, like List or an array). If so, it uses that Count or Length directly. But if that fails, it has to iterate the entire thing to get the count. So if you’re using for loops with .Count() in the condition, it’s possible you’ll have a really bad time, even with only one iteration.

So it’s best to use foreach or the enumerator you get from the IEnumerable. foreach is really just syntactic sugar for getting the enumerator and looping with it; they’re equivalent. Sometimes foreach isn’t flexible enough for whatever loop you need, though.
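The Count() difference is easy to demonstrate; a hedged sketch (names made up, and the ICollection shortcut described above is the behaviour of LINQ-to-Objects specifically):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class CountDemo
{
    // A raw iterator: nothing stores a count anywhere.
    static IEnumerable<int> Numbers()
    {
        for (var i = 0; i < 1_000_000; i++)
        {
            yield return i;
        }
    }

    static void Main()
    {
        var list = Numbers().ToList();

        // Cheap: List<T> implements ICollection<T>,
        // so Count() just reads a stored field.
        var fast = list.Count();

        // Expensive: no shortcut available,
        // so LINQ iterates all million items to count them.
        var slow = Numbers().Count();

        Console.WriteLine(fast == slow); // True, but very different cost

        // Worst case: re-counting on EVERY loop iteration. With a plain
        // IEnumerable, either materialise first or just use foreach.
        // for (var i = 0; i < Numbers().Count(); i++) { ... }
    }
}
```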


Firstly I’d like to say thank you for your speedy input! :vulcan_salute:

We’re just using good ol’ SQL Server. The funny part is that in this instance, we’re handling the same amount of data regardless.

We’re using an old version of AutoMapper; the only reason we’ve not upgraded to the latest version is purely the risk. There are TONS of places where it’s used & set up… That in itself would be an unnecessarily large refactoring project/task! :rofl:

As for Take(x), that’s actually how we’re implementing pagination! :sweat_smile: E.g.

// Some other LINQ code... 

.Skip((page - 1) * pageSize)
.Take(pageSize)

// Some other LINQ code...
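One thing worth double-checking in a snippet like that: on a SQL-backed IQueryable, Skip/Take only produces stable pages if there’s an OrderBy in front of it, since SQL guarantees no row order otherwise. A hedged sketch of the shape to aim for (the Document type, CreatedOn property & sample data are all made up for illustration; it uses an in-memory AsQueryable as a stand-in for an ORM query):

```csharp
using System;
using System.Linq;

class PagingDemo
{
    record Document(int Id, DateTime CreatedOn);

    static void Main()
    {
        // Stand-in for a database-backed IQueryable<Document>.
        var documents = Enumerable.Range(1, 95)
            .Select(i => new Document(i, DateTime.Today.AddDays(-i)))
            .AsQueryable();

        var page = 2;
        var pageSize = 10;

        var pageOfDocs = documents
            .OrderBy(d => d.CreatedOn)   // deterministic order BEFORE paging
            .Skip((page - 1) * pageSize)
            .Take(pageSize)
            .ToList();                   // one query, one enumeration

        Console.WriteLine(pageOfDocs.Count); // 10
    }
}
```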

As for your advice on Count(), I’ll keep that in mind, thank you! In the event that it is terribly slow, what might one suggest as a way around it? Fire off a manual query or something? :man_shrugging:

Again, I’m new to .NET, so this is probably a really stupid question & there are just more devils in the detail that I’m unaware of! :joy:

Yeah, but you’re probably doing that on the database. LINQ there is not exactly equivalent to LINQ on in-memory data structures. It looks the same, but if you check, all of those parameters you pass into Where etc. are Expression&lt;Func&lt;...&gt;&gt; as opposed to Func&lt;...&gt;. That single word makes it an expression tree instead of a function. Your SQL provider (probably Entity Framework) can walk through the expression tree to convert it into the appropriate SQL statement. Converting a compiled Func to SQL would be completely unmanageable madness. So you absolutely cannot compare what happens with LINQ in memory versus what happens with database queries.
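To see the distinction concretely: the Expression version is data you can inspect (which is exactly what an ORM does to build SQL), while the Func is compiled code you can only invoke. A minimal sketch:

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Compiled code: all you can do is call it.
        Func<int, bool> func = n => n > 5;
        Console.WriteLine(func(7)); // True

        // An expression tree: data describing the lambda, which a
        // provider like Entity Framework can walk & translate to SQL.
        Expression<Func<int, bool>> expr = n => n > 5;
        Console.WriteLine(expr.Body); // prints "(n > 5)"

        // You can still compile it back into a callable Func if needed.
        var compiled = expr.Compile();
        Console.WriteLine(compiled(7)); // True
    }
}
```

Note the source code of the two lambdas is identical; only the declared type tells the compiler whether to emit IL or an expression tree.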

Performance-wise, things are different that way too. When you do this:

dataSource.OrderBy(x => x.Property).First()

In memory, that is kind of suss imo: you’re sorting the entire thing just to get the smallest element. In a database query, no problem. That’s exactly what you do, and you create an index on Property. I’ve seen this a lot on in-memory lists (they even very recently added MinBy and MaxBy to LINQ for that exact reason). I feel like people sometimes try too hard to use LINQ for no good reason. I’ve seen much worse; this is just the most common case. It seems loops scare people at this point. Sometimes loops are great! (Obviously not for querying the database.)
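For the in-memory case, the difference is easy to show. MinBy (added in .NET 6) scans once in O(n) instead of sorting the whole sequence in O(n log n); the Item type & data below are made up:

```csharp
using System;
using System.Linq;

class MinByDemo
{
    record Item(string Name, int Score);

    static void Main()
    {
        var items = new[]
        {
            new Item("a", 30),
            new Item("b", 10),
            new Item("c", 20),
        };

        // O(n log n): sorts everything just to read the first element.
        var viaSort = items.OrderBy(i => i.Score).First();

        // O(n): single pass, same result (requires .NET 6+).
        var viaMinBy = items.MinBy(i => i.Score);

        Console.WriteLine(viaSort.Name);   // b
        Console.WriteLine(viaMinBy!.Name); // b
    }
}
```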

Try to only iterate once if possible, like I said, with foreach or the enumerator from the IEnumerable. If there’s no way around it, or it would just take too much time to restructure right now, .ToList() is perfectly fine. Memory footprint, unless you’re working with very large lists, isn’t really the biggest concern, I think. Also, if you have a Select that creates objects and you iterate twice, you’ve allocated more objects in total than you would have with ToList; the only difference is that with ToList they’re all in memory at the same time. Either way, the garbage collector has to clean up all those individual objects after you. Unless you genuinely cannot fit the list in memory, I’d argue re-iterating is worse.