CSLA Remote Data Portal Security

Published
Monday, 4 April 2022
Category
CSLA
Author
Andrew H

CSLA and Networking

CSLA is a framework in the .NET ecosystem intended to reduce the cost of building and maintaining applications. The core ideas have been around for a long time - longer than the .NET ecosystem has existed - but the framework remains just as relevant today as it was when it was first introduced.

One of the key goals of CSLA is to provide flexibility in the deployment of applications. Rather than bake in a fixed and inflexible architecture that forces the developer to make a decision on the most appropriate deployment model early in the life of a system, CSLA supports making that decision late in the day - at (very close to) deployment time.

Agility is about being able to pivot: to change direction quickly and without a significant rewrite - or refactor, as it is increasingly called. Using an architecture that provides flexibility in deployment helps us achieve greater agility.

CSLA achieves this feat through the use of the data portal. The data portal is an abstraction over the networking involved in a distributed, or potentially distributed, system. The framework seeks to identify a separation between logical client-side operations and logical server-side operations. If we employ this separation, we can then decide during deployment whether to run them in the same process, or in different processes, perhaps with a network between them.

Sadly, the power of the data portal is often overlooked. I have sometimes called it the most misunderstood technology of the last 25 years, such is the scale of the misunderstanding. Recognising and understanding the power of the separation the data portal provides has been key to adoption: those who understand it will use the framework, whereas those who do not may be put off by the challenges its alternative conventions can introduce.

The data portal is very powerful, but (as the saying goes) with great power comes great responsibility. It can be easy to overlook the trade-offs being made with the decisions that are encompassed by the framework, and one of the areas where this may be a problem is with security. Different deployment models bring with them different threat models, and it is important that you consider the risks associated with the deployment model you choose and mitigate those risks as part of your solution.

CSLA and Blazor

Blazor is a great web technology. Since I first saw Steve Sanderson's talk at NDC Oslo where he revealed the concept to the world, I've been pretty convinced that it (or something like it) would revolutionise web development for the better. Productivity can be greatly enhanced by using Blazor, so it's a technology I recommend for new web development.

CSLA played an important part in my interest in Blazor. I only learned about its existence when I saw a blog post from Rocky talking about trialling whether CSLA could work with Blazor. Once I had understood what Blazor was, and the way the two could work so well together, I was hooked. If the two could play nicely together, it gave users of CSLA a great way to enhance their web solutions, migrating them to newer and more productive technologies. That was a very attractive proposition.

Blazor offers two different hosting models: Blazor Server and Blazor WebAssembly. Blazor Server runs all of the .NET code on the server and streams DOM updates to a JavaScript client over a SignalR connection. Blazor WebAssembly instead runs .NET code in the browser on a WebAssembly-based runtime. Architecturally, the two are very different, and accessing privileged data from the server needs to be handled differently between them.

Details of the differences between the hosting models can be found in the Microsoft docs.

In Blazor Server it is likely that the server on which the .NET code is running is within a network that can directly access backend data stores or services. In Blazor WebAssembly the .NET code is running on the client device, and for a public system running over the internet that means an untrusted device, one which is unlikely to be allowed to access a secure backend store or service directly - especially a database.

Traditionally, demonstrations of the technology suggest handling the difference between the models by creating services, injected through DI, that implement their data access in different ways. This allows the simplest solution - more direct data access - to be used for Blazor Server sites, whereas an extra network hop is used for Blazor WebAssembly. Normally this includes introducing an API layer - such as a REST API - at the boundary of the secure network, through which the WebAssembly app requests the data it needs. This ensures that authentication and authorisation can be applied in the API, to handle the change in the security context.
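A minimal sketch of that conventional pattern might look like the following (the IWeatherService types and the api/forecasts route are hypothetical, purely for illustration):

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record WeatherForecast(string Summary);

// Shared abstraction consumed by the UI, whatever the hosting model
public interface IWeatherService
{
    Task<List<WeatherForecast>> GetForecastsAsync();
}

// Blazor Server: runs inside the trusted network, so it can use
// direct data access (the actual database call is elided here)
public class DirectWeatherService : IWeatherService
{
    public Task<List<WeatherForecast>> GetForecastsAsync() =>
        Task.FromResult(new List<WeatherForecast>()); // would query the store directly
}

// Blazor WebAssembly: runs on the untrusted client, so it takes the
// extra network hop through a REST API at the secure boundary
public class ApiWeatherService : IWeatherService
{
    private readonly HttpClient _client;
    public ApiWeatherService(HttpClient client) => _client = client;

    public async Task<List<WeatherForecast>> GetForecastsAsync() =>
        await _client.GetFromJsonAsync<List<WeatherForecast>>("api/forecasts");
}

// Each host then registers the implementation matching its topology:
//   Server: services.AddScoped<IWeatherService, DirectWeatherService>();
//   Wasm:   services.AddScoped<IWeatherService, ApiWeatherService>();
```

The UI components depend only on the abstraction, so they compile unchanged for both hosts; the cost is that the team must build and maintain the API layer themselves.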

CSLA is able to offer a different solution to this change in topology. The data portal is already capable of being configured to run data access operations locally using LocalProxy, or remotely using one of a number of remote data portals, including HttpProxy. This allows any teams using CSLA to be able to support either Blazor runtime topology without changes to their core code, enabling them to transition between them with ease, and without introducing extra complexity.
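As a sketch of how that configuration choice might look in CSLA 6 (written from memory - the exact fluent option names can vary between 6.x releases, and the URL is a placeholder):

```csharp
// Blazor Server host: the data portal runs in-process; LocalProxy is
// the default, so no client-side proxy configuration is needed
builder.Services.AddCsla();

// Blazor WebAssembly client: route data portal calls over HTTP to a
// remote data portal endpoint instead
builder.Services.AddCsla(o => o
  .DataPortal(dpo => dpo
    .AddClientSideDataPortal(cso => cso
      .UseHttpProxy(hp => hp.DataPortalUrl = "https://example.com/api/DataPortal"))));
```

The business classes themselves are unchanged; only this registration differs between the two hosts.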

If a remote data portal is used, CSLA manages the network operations. In the case of the HTTP data portal, this involves the host exposing a CSLA controller as the intermediary API through which client to server communication can occur. This replaces the need for developers to write and expose their own API, with the advantage that they can focus on developing their business solutions without slowing themselves down with plumbing code. However, the fact that CSLA deals with the networking for us does not absolve us of all responsibility. The CSLA controller is our security boundary between the untrusted client and the trusted server, and we must make sure we consider the implications of the boundary and be sure to apply the necessary protections to it.

Protecting the Remote Data Portal

In a lot of architectures, the data portal endpoint of a remote data portal implementation is a trust boundary. It is intended that the endpoint sits on a trusted server, but is accessed from an untrusted client. Anything that the client does could represent an attempt to subvert the code that runs on the client machine; indeed, there is no guarantee that the call to the endpoint even comes from the code we wrote. We should assume that any call received is invalid until proven otherwise, and any decisions that are made on the client should be rechecked on the server.

Since as far back as 2008, CSLA's remote data portal implementation has included extensibility points for adding protection to enhance its security when necessary. However, in older framework versions there were no protections available in the box, leaving developers to create and wire up their own implementations. They had both to know that the extensibility points existed and to understand how and when to use them. This left a lot to chance, and I began to wonder whether other developers were doing the work to identify the protections needed. To address this risk, in CSLA 6 we chose to enhance the framework by adding some default protections, hoping to reduce the risk of people deploying remote endpoints without the appropriate protection applied.

At the very least, we should probably be doing the following:

  1. Configure authentication to protect the security principal
  2. Check that the user is authorized to perform the task being carried out
  3. Check that the data being submitted meets the validation rules we have defined
  4. Minimise the amount of information returned from the server to the client, especially in the case of exceptions, as exceptions may contain sensitive information

In CSLA 5.x, two of the four numbered tasks above were intended to be tackled via a single extensibility point. In CSLA 6 they are all addressed separately. Having them separate allows a greater degree of control; you may customise the solutions more easily and independently of one another.

Note that there is a difference between the defaults applied between CSLA 5.x and 6.x. Because the improvements backported to CSLA 5.x are being released in a patch it was considered inappropriate to enable them by default. Instead it is the developer's responsibility to enable all of the protections when using 5.x. In CSLA 6, any protection that places no additional requirements upon the consuming code is enabled by default. However, this is not true for all of the protections, as only a limited subset of hosts are able to fulfil the additional requirements brought about by the implementation details of one of the protection mechanisms.

Authentication

There are two parts to ensuring authentication is appropriately secured. Firstly, we should perform authentication on the server, not the client. Secondly, we need to stop the user principal from flowing from the client to the server, ensuring that authorisation decisions cannot be influenced by the user.

Ensuring that we perform authentication on the server not the client is not specifically to do with CSLA, but is a general industry recommendation. The client-side code of any SPA runs on an untrusted device, and the user of that device can manipulate anything that exists on the client. It is therefore vitally important that the server is used as the source of truth as to who the user is, and what permissions they hold (in the form of claims). Exactly how to do this part varies by the identity service in use, and is therefore beyond the scope of this article. However, the overall outcome must be that the user's ClaimsPrincipal be loaded on the server by the identity service when a call is made into the remote data portal.

Beware! Authentication is a very complex subject, and should not be underestimated. Virtually no team will have sufficient knowledge to create a secure identity system of their own, so I implore you: NEVER create your own. Instead, use an existing, trusted and well-respected identity system, and explore the extensibility points it offers if you want to customise the solution somewhat. For example, you can use ASP.NET Core Identity with a custom state store provider of your own making. A state store provider is largely only responsible for data access; the Identity system itself takes responsibility for everything else security-related, such as password hashing and management.
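To illustrate that extensibility point (the AppUser and DapperUserStore types are hypothetical names of your own making):

```csharp
// Plug a custom state store into ASP.NET Core Identity; Identity itself
// retains responsibility for password hashing, lockout, tokens and the
// rest of the security-sensitive machinery.
builder.Services
    .AddIdentityCore<AppUser>(options =>
    {
        options.Password.RequiredLength = 12;
    })
    .AddUserStore<DapperUserStore>();

// DapperUserStore implements IUserStore<AppUser> (plus, if passwords are
// persisted, IUserPasswordStore<AppUser>) - data access only, no crypto.
```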

And so onto the second part: configuring CSLA. By default, CSLA seamlessly flows the user's security information from the client to the server as part of a data portal call, and as we have discussed above, this is not an appropriate behaviour if the remote data portal is exposed to untrusted clients. Instead, we must turn off this behaviour, or we are blindly trusting information that the client is capable of faking. This is achieved by changing the AuthenticationType property of CSLA in a specific - and probably unexpected - way.

CSLA was created for .NET Framework many years ago. .NET Framework was only available on Windows, so the setting we need carries this legacy with it. The only value of AuthenticationType for which CSLA performs special behaviour is if we set it to "Windows" - so this is what we must do. Not only must we use this special value (which is case-sensitive) but we must set it on both the client and the server. Setting it in only one of the two places will result in an exception being raised on every call.

In CSLA 5.x the following code is required to change this setting to a more appropriate value:

services.AddCsla(cfg => cfg
                .DataPortal()
                .AuthenticationType("Windows")
                );

In CSLA 6.x the syntax required to change this setting is slightly different. Changing AuthenticationType to its more appropriate value is instead done as follows:

builder.Services.AddCsla(cfg => cfg
                .DataPortal(dpo => dpo
                .AuthenticationType("Windows")
                ));

Remember: This setting is case-sensitive. You must use the exact string shown.

I'm not comfortable with the way this part of the data portal works at the moment. We are not applying an important security principle here, known as "secure by default". In today's security landscape we would no doubt write this code differently, defaulting the other way around. It would be better if the framework were not to flow any security information from the client to the server unless it was specifically enabled. We would also use something other than a string for applying the setting, to minimise the risk of misconfiguration because of a typo.

Be very careful with this setting; it is all too easy to run in an insecure configuration without realising it.

Be sure to change this setting in both the client and server applications.

Authorization

CSLA provides the IAuthorizeDataPortal extensibility point for checking authorization. In CSLA 5.x this can also be used for initiating revalidation, but in CSLA 6 there is a more appropriate solution for the latter.

The new implementation of IAuthorizeDataPortal is Csla.Server.ActiveAuthorizer.

In CSLA 5.x, this protection must be enabled by specifying this implementation in your config file, or by using the following code:

services.AddCsla(cfg => cfg
                .DataPortal()
                .ServerAuthorizationProviderType(typeof(Csla.Server.ActiveAuthorizer))
                );

In CSLA 6.x, this protection is enabled by default. If you wish to override it, you may do so. The original NullAuthorizer remains if you wish to disable the protection completely, although I would not recommend doing so - this would leave your system vulnerable to attack.

Beware! Per-property authorization rules are applied on the client and are therefore also at risk of being subverted. However, it is not possible to reapply per-property authorizations on a server. Do not use per-property rules as your only protection against abuse of your functionality when using a remote data portal.
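Per-type authorization rules, by contrast, are also evaluated by the data portal on the server, which is what makes them suitable as a real protection. A sketch of the familiar pattern, using CSLA's common IsInRole rule against a hypothetical CustomerEdit type (role names are illustrative):

```csharp
using System;
using Csla;
using Csla.Rules;
using Csla.Rules.CommonRules;

[Serializable]
public class CustomerEdit : BusinessBase<CustomerEdit>
{
    // Per-type rules: these are checked server-side by the data portal,
    // so they cannot be bypassed by a subverted client
    public static void AddObjectAuthorizationRules()
    {
        BusinessRules.AddRule(typeof(CustomerEdit),
            new IsInRole(AuthorizationActions.EditObject, "Sales"));
        BusinessRules.AddRule(typeof(CustomerEdit),
            new IsInRole(AuthorizationActions.DeleteObject, "SalesManager"));
    }
}
```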

Revalidation

Originally there was no specific extensibility point for this protection mechanism; instead, it was envisaged that revalidation would be achieved as part of authorization. In CSLA 6.x we have reworked the IInterceptDataPortal extensibility point. It was originally intended for behaviours that can now be achieved with DI, so its original purpose is no longer necessary. The rework allows the data portal to support multiple interceptors, which run later in the data portal's execution, thereby offering a good platform for any form of interception operation, including revalidation.

Aside: IInterceptDataPortal is now implemented using a little-known feature of DI - that a consumer can request multiple implementations by requesting an IEnumerable<T> rather than T itself. This allows for multiple interceptors to be used together, with each executed in the order in which they are registered. As a result, IInterceptDataPortal implementations can be combined into something akin to a request pipeline, with each implementation passing control to the next in the chain. This should offer a very powerful way to extend the data portal in the future.
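The behaviour is easy to see in isolation with Microsoft.Extensions.DependencyInjection (the IInterceptor types here are illustrative stand-ins, not CSLA's own):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.DependencyInjection;

public interface IInterceptor { string Name { get; } }
public class LoggingInterceptor : IInterceptor { public string Name => "Logging"; }
public class RevalidatingInterceptor : IInterceptor { public string Name => "Revalidating"; }

public static class Demo
{
    public static void Main()
    {
        var services = new ServiceCollection();

        // Multiple registrations against the same service type...
        services.AddTransient<IInterceptor, LoggingInterceptor>();
        services.AddTransient<IInterceptor, RevalidatingInterceptor>();

        using var provider = services.BuildServiceProvider();

        // ...are all returned when IEnumerable<T> is requested, in
        // registration order - the basis of an interception pipeline
        IEnumerable<IInterceptor> pipeline = provider.GetServices<IInterceptor>();
        foreach (var interceptor in pipeline)
            Console.WriteLine(interceptor.Name);
    }
}
```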

In CSLA 5.x, revalidation is achieved by code within the ActiveAuthorizer class, so if you enable authorization using this class then you also get revalidation for free.

In CSLA 6.x, this task is instead offered by the Csla.Server.Interceptors.ServerSide.RevalidatingInterceptor class, which is added to the data portal intercept pipeline by default. To disable this revalidation you should remove the interceptor, although I would not recommend doing so - this would leave your system vulnerable to attack.

Exception Sanitization

Exceptions are a common area of risk; an exception can leak sensitive implementation details outside a trust boundary if not handled or sanitized. Since supporting the flow of exceptions from the server to the client via the data portal is an intentional choice, we should not suppress exceptions entirely; instead we should log them and then return a safe, sanitized exception in their place.

CSLA provides the IDataPortalExceptionInspector extensibility point for inspecting and handling exceptions.

The new implementation of IDataPortalExceptionInspector is Csla.Server.SanitizingExceptionInspector.

In CSLA 5.x, this protection must be enabled by specifying this implementation either in your config file, or by using the following code:

services.AddCsla(cfg => cfg
                .DataPortal()
                .ExceptionInspectorType(typeof(Csla.Server.SanitizingExceptionInspector))
                );

In CSLA 6.x, this protection must also be enabled - it is NOT enabled by default. It is inappropriate to enable by default because it places requirements upon the host application, namely that:

  1. Logging is enabled, so that ILogger can be resolved from DI
  2. A host builder is in use, so that IHostEnvironment can be resolved from DI

Enable the protection as follows:

builder.Services.AddCsla(o => o
  .DataPortal(dpo => dpo
    .AddServerSideDataPortal(sso => sso.RegisterExceptionInspector<Csla.Server.SanitizingExceptionInspector>())
    ));

It is unfortunate that this implementation cannot be enabled by default. However, forcing additional, external requirements onto developers seemed inappropriate in a shared library. These additional requirements could cause so much confusion under certain circumstances - especially during initial experiments using CSLA - that it might deter people from adopting the framework at all.

Remember: This protection is not enabled by default. Be sure to enable it using the code shown unless your system cannot support the necessary logging and environmental dependencies.

Most Secure Configuration

Bringing all of these settings together, this section offers the most secure settings for the remote data portal in mixed-trust environments.

Server

The changes to be applied on the server are shown in this section.

For CSLA 5.x:

services.AddCsla(cfg => cfg
                .DataPortal()
                .AuthenticationType("Windows")
                .ExceptionInspectorType(typeof(Csla.Server.SanitizingExceptionInspector))
                .ServerAuthorizationProviderType(typeof(Csla.Server.ActiveAuthorizer))
                );

For CSLA 6.x:

builder.Services.AddCsla(cfg => cfg
                .DataPortal(dpo => dpo
                .AuthenticationType("Windows")
                .AddServerSideDataPortal(sso => sso
                .RegisterExceptionInspector<Csla.Server.SanitizingExceptionInspector>())
                ));

Client

In order for the server-side authentication type setting shown above to work without throwing an exception, we also need to apply a similar change on the client.

For CSLA 5.x:

builder.UseCsla(cfg => cfg
                .DataPortal()
                .AuthenticationType("Windows")
                );

For CSLA 6.x:

builder.Services.AddCsla(cfg => cfg
                .DataPortal(dpo => dpo
                .AuthenticationType("Windows")
                ));

Summary

The CSLA remote data portal endpoint is likely to warrant protection in most deployment topologies, as it is likely to represent a security trust boundary. Using Blazor WebAssembly is very likely to result in a topology that is at risk - but this is far from the only client technology that will introduce the risk.

CSLA 6.x now includes protection implementations in the box, and in most cases they are enabled by default. However, authentication must still be configured manually. Furthermore, exception sanitization must still be enabled by the developer where the host application can support the provided implementation - or the developer must create an implementation of their own.

The protections created have also been backported to the CSLA 5.x branch. Now that 5.5.3 has been released, the 5.x line also includes implementations for these important protections. However, as they were added in a patch created for an important bugfix, they must be enabled by the developer - a decision taken to avoid the risk that people might encounter unexpected breaking changes to their host applications as a result of employing a patch release.

Check your deployments, consider the risks, and enable the necessary protections if you are using any of the remote data portal implementations in a mixed trust environment.