Designing loosely coupled Entity Framework based applications

The main purpose of designing loosely coupled applications is to reduce the dependencies between the components of the system. This provides flexibility, maintainability, and modularity, which are especially important in long-term projects.

Dependency inversion

To achieve this, it is highly recommended to follow the dependency inversion principle (one of the SOLID principles):

  • High level modules should not depend upon low level modules. Both should depend on abstractions.
  • Abstractions should not depend upon details. Details should depend upon abstractions.

[Diagram: dependency inversion, with both the high level and the low level module depending on an abstraction]

In practice this means that a class should not depend directly on another class (detail) but on an interface (abstraction). The TopComponent has a member (socket) that accepts any component that implements the ILowComponent interface. In this way, the TopComponent does not depend on the LowComponent. The ILowComponent is defined by the top component, so it is actually the low component that depends on the top component (an inverted dependency). This programming technique is usually referred to as component-based software engineering. In this blog post I will introduce a loosely coupled architecture for Entity Framework based applications that uses this method.
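
A minimal sketch of this arrangement (the DoWork and Run members are illustrative, only the socket member comes from the description above):

public interface ILowComponent
{
    void DoWork();
}

public class TopComponent
{
    private ILowComponent socket;

    public TopComponent(ILowComponent socket)
    {
        // Any ILowComponent implementation can be plugged in here.
        this.socket = socket;
    }

    public void Run()
    {
        // The top component only knows the abstraction.
        this.socket.DoWork();
    }
}

public class LowComponent : ILowComponent
{
    public void DoWork()
    {
        // The detail now depends on the abstraction defined by the top component.
    }
}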

The demo application

The demo application manages the content of a database with a very simple schema: a single table with a few data fields and an auto-generated primary key. I created a new Entity Framework model and imported the database schema.

[Screenshot: the generated Entity Framework model with the Person entity]

Components

The main part of the demo application is a simple data component that manages the content of the database. It can be described with the following interface:

public interface IDataService
{
    Person GetPerson(int id);

    void StorePerson(Person person);

    void DeletePerson(Person person);
}

How should the GetPerson method be implemented in a RAD project using Entity Framework? This is a possible solution:

public Person GetPerson(int id)
{
    if (id < 0)
    {  
         return new Person();
    }

    using (ClubEntities context = new ClubEntities())
    {
        return context.People.Single(p => p.Id == id);
    }
}

There are multiple problems with this implementation:

  • Smaller problem: the component completely depends on Entity Framework.
  • Bigger problem: the initialization of the ClubEntities object context class is baked into the implementation of the component, so there will be no easy way to alter it later (imagine how many times the new ClubEntities() expression could be written in a large application).

The former can be solved by disabling the code generation of the EDM, implementing the entity classes as plain old objects, and implementing the object context class with a component-based approach. This would require a lot of work if there were many tables in the database, so instead I will apply this POCO T4 template to alter the code generation. I also modified it a little bit to make it generate an interface for the object context class. This is the generated code:

public partial class Person
{
    public virtual int Id { get; set; }
    
    public virtual string Name { get; set; }
    
    public virtual string Title { get; set; }
}

public partial interface IClubEntities : IDisposable
{
    IObjectSet<Person> People { get; }
    
    ObjectStateManager ObjectStateManager { get; }
    
    int SaveChanges();
}

public partial class ClubEntities : ObjectContext, IClubEntities
{
   // Self-evident implementation
}

It surely could have been done in an even more ORM-independent way, but this will be enough for now.

The latter problem can be solved by decoupling the functionality: the data component should not instantiate the object context class; this role should belong to a separate component. Let this component be a factory whose interface can be defined like this:

public interface IObjectContextFactory
{
    IClubEntities Create();
}

This is everything we need to be able to rewrite the implementation of the data service component.

public class DataService : IDataService
{
    private IObjectContextFactory contextFactory;

    public DataService(IObjectContextFactory contextFactory)
    {
        this.contextFactory = contextFactory;
    }

    public Person GetPerson(int id)
    {
        if (id < 0)
        {  
            return new Person();
        }

        using (IClubEntities context = this.contextFactory.Create())
        {
            return context.People.Single(p => p.Id == id);
        }
    }
}

The object context factory component is a member of the data service component and is set through its constructor. The query method instantiates the object context with the help of the factory component. Notice that we were able to implement the data component without implementing the factory component. This is one of the main advantages of component-based software engineering.
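
For example, the data component could already be unit tested with a fake factory. This is a hypothetical sketch; FakeObjectContextFactory and the fake IClubEntities implementation are not part of the demo application:

public class FakeObjectContextFactory : IObjectContextFactory
{
    private IClubEntities entities;

    public FakeObjectContextFactory(IClubEntities entities)
    {
        this.entities = entities;
    }

    public IClubEntities Create()
    {
        // Hands out a prepared fake context instead of a real one.
        return this.entities;
    }
}

// Usage: exercise the data service without touching the database.
// IDataService dataService = new DataService(new FakeObjectContextFactory(fakeEntities));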

The next task is to implement the factory component. The ObjectContext class – the base of ClubEntities – provides multiple constructors. One of them expects no arguments and uses the default, baked-in connection string to initialize itself. The other two expect an EntityConnection object or a connection string. The former will be used this time. But how should the EntityConnection object be created? Let's create another component that serves as a provider for this kind of object. Its interface is defined like this:

public interface IConnectionProvider : IDisposable
{
    EntityConnection Connection { get; }
}

Note that this is a disposable component, because it also handles the lifetime of the provided connection object. If the provider component is disposed, then the connection object will be disposed too. The reason behind this will be explained later. For now, we can implement the IObjectContextFactory interface; it is really self-evident:

public class ObjectContextFactory : IObjectContextFactory
{
    private IConnectionProvider provider; 

    public ObjectContextFactory(IConnectionProvider provider)
    {
        this.provider = provider;
    }

    public IClubEntities Create()
    {
        return new ClubEntities(this.provider.Connection);
    }
}

So what is the motivation behind the connection provider component? If an ObjectContext is instantiated with the parameterless or the connection string constructor, then the internally created connection object is disposed together with it. If the ObjectContext is instantiated with an external connection object, that connection will not be disposed when the ObjectContext instance is disposed. The following sequence diagrams might help to understand the two mechanisms.

[Sequence diagram: disposing an ObjectContext created with the parameterless or connection string constructor also disposes the internal connection]

[Sequence diagram: disposing an ObjectContext created with an external EntityConnection leaves the connection intact]

This means that the externally created EntityConnection instance should either be disposed manually or left to GC finalization (which is not recommended). You might ask why we can't use the other constructors. My main reason is that I want to ensure that the ObjectContext instances use the same EntityConnection instance. There is more than one reason for this:

  • TransactionScope: if two connection objects are used in the same transaction scope, then the operations might be executed in a distributed transaction, which requires DTC (even though one connection object would be enough); see the sketch after this list.
  • Automated testing: this is explained in another blog post.
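
To sketch the first point: both contexts below are created through a factory that holds the same provider, so they run on a single connection and the transaction does not need to be escalated. This is an illustrative sketch; TransactionScope lives in System.Transactions:

using (IConnectionProvider provider = new DefaultConnectionProvider())
using (TransactionScope scope = new TransactionScope())
{
    IObjectContextFactory factory = new ObjectContextFactory(provider);

    using (IClubEntities context = factory.Create())
    {
        // First operation, runs on the shared connection.
        context.People.AddObject(new Person { Name = "John" });
        context.SaveChanges();
    }

    using (IClubEntities context = factory.Create())
    {
        // Second operation reuses the same connection,
        // so the transaction is not promoted to a distributed one.
        context.People.AddObject(new Person { Name = "Jane" });
        context.SaveChanges();
    }

    scope.Complete();
}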

Now that the purpose of the connection provider component has been explained, a base implementation can be created:

public abstract class ConnectionProviderBase : IConnectionProvider
{
    private EntityConnection connection;

    public EntityConnection Connection
    {
        get 
        {
            if (this.connection == null)
            {
                this.connection = this.CreateConnection();
            }

            return this.connection;
        }
    }

    protected abstract EntityConnection CreateConnection();

    public void Dispose()
    {
        if (this.connection != null)
        {
            this.connection.Dispose();
        }
    }
}

There is nothing complicated here. The component does lazy initialization: the connection object is only created when it is first requested. If the component is disposed, then the connection object will be disposed too (if it exists). It is really easy to create a concrete component based on this implementation:

public class DefaultConnectionProvider : ConnectionProviderBase
{
    protected override EntityConnection CreateConnection()
    {
        return new EntityConnection("name=ClubEntities");
    }
}

This is quite self-evident. At this point there is no problem with baking in the connection configuration, because it can easily be replaced by implementing another component derived from the same base class.
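
For instance, a hypothetical provider that receives the connection string from the outside could look like this:

public class CustomConnectionProvider : ConnectionProviderBase
{
    private string connectionString;

    public CustomConnectionProvider(string connectionString)
    {
        this.connectionString = connectionString;
    }

    protected override EntityConnection CreateConnection()
    {
        // The connection configuration comes from the caller instead of being baked in.
        return new EntityConnection(this.connectionString);
    }
}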

Assembling the components

All the required components are ready, there is only one thing to do: assemble them. The following component diagram shows the simplest configuration:

[Component diagram: the façade component and a data service sharing one connection provider]

The façade component serves as a top-level component that is aware of the lifetime of the request currently served by the application. It holds a reference to the connection provider component that is used by all the object context factory components. At the appropriate time (typically at the end of the request), it disposes the connection provider component, which results in the disposal of the connection object. A simple implementation of the façade component could look like this:

using (IConnectionProvider provider = new DefaultConnectionProvider())
{
    IDataService dataService = new DataService(new ObjectContextFactory(provider));
    
    Person person = dataService.GetPerson(id);
    person.Title = "Software engineer";

    dataService.StorePerson(person);
}

It is easy to imagine what a more complex application would look like. The following diagram is similar to the previous one, but it uses three types of data service to handle requests.

[Component diagram: three data service components sharing the same connection provider]

Note that all the object context factories and the façade service component use the same connection provider component instance. This architecture ensures that the data service components use the same connection object, which is also disposed at the appropriate time.
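
In code, such a configuration could look something like this (the second service instance is hypothetical; the point is the shared provider instance):

using (IConnectionProvider provider = new DefaultConnectionProvider())
{
    // Each factory receives the same provider, so every context
    // created during the request uses the same connection.
    IDataService dataService = new DataService(new ObjectContextFactory(provider));
    IDataService otherDataService = new DataService(new ObjectContextFactory(provider));

    // ... serve the request with both services ...
}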

Conclusion

Designing loosely coupled applications is not trivial. Finding the proper way to decouple functionality requires a deep understanding of the environment and of the planned features of the application being designed.

How to enable remote Powershell?

Interestingly, it was quite confusing to find out how to set it up properly, so I decided to create a blog post about it.

On the server side, open PowerShell with elevated privileges (Run as Administrator), and execute this command:

Enable-PsRemoting

Unfortunately, this will not be enough if any of your current network interfaces is not in a private or domain network. For example, if VMware Workstation is installed on your system, its virtual network interfaces are categorized as members of unidentified networks. To overcome this issue, check the solution introduced by this post.
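
Alternatively, if PowerShell 3.0 or later is available, the network profile check can simply be skipped (note that this switch does not exist in PowerShell 2.0):

Enable-PsRemoting -SkipNetworkProfileCheck -Force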

Now we can move to the client side. If the client is not joined to a domain network, we have to mark the servers as trusted hosts. To achieve this, open cmd.exe (not PowerShell) with elevated privileges and run this command:

winrm set winrm/config/client @{TrustedHosts="server1, server2"}

You can read or write the TrustedHosts property with PowerShell too: run the PowerShell prompt with elevated privileges and use the Get-Item and Set-Item cmdlets with the appropriate WSMan: path. The path is autocompleted, so it is much easier to type here. You can also use the Concatenate switch to simply add a new host. In order to allow the client to connect to any server, use the wildcard character.

Get-Item WSMan:\localhost\Client\TrustedHosts | select -ExpandProperty Value
Set-Item WSMan:\localhost\Client\TrustedHosts "server1,server2"
Set-Item WSMan:\localhost\Client\TrustedHosts "server3" -Concatenate -Force
Set-Item WSMan:\localhost\Client\TrustedHosts *

Now we can connect to the server with a PowerShell session. On a domain network this might be enough:

Enter-PsSession server1

If you are on a non-domain network or your current user does not have the appropriate rights, you have to specify the appropriate credentials explicitly.

Enter-PsSession server1 -Credential (Get-Credential)

You can also run multiple sessions simultaneously, and switch between them on demand.

$s = @("server1", "server2") | New-PsSession
Enter-PsSession $s[0]
Exit-PsSession
Enter-PsSession $s[1]
exit

Unfortunately these commands are meant for interactive use only, so you cannot use them in scripts. For remote script execution Invoke-Command is the way to go, but that is a much longer story…
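
Just to give a taste of it, here is a minimal sketch that runs a script block on the sessions created above:

Invoke-Command -Session $s -ScriptBlock { $env:ComputerName }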

Exchange well-known folder hell

During an Exchange Server migration I encountered an annoying problem. I exported the mailboxes from an old Exchange 2003 server into PST files. After that, these files were imported into the newly created mailboxes of a new Exchange 2010 server. This process did not go absolutely smoothly; the following problem occurred:

The former figure presents what happens if the languages of the exported and the new mailbox are not identical (e.g. hu-HU => en-US), the latter shows what the mailbox looks like if the languages are identical (e.g. en-US => en-US).

[Screenshots: owa2 (languages not identical), owa1 (languages identical)]

I searched around the internet and found out that this is a really common phenomenon and is said to be a bug. It seems that the well-known folders are referenced by their localized names instead of invariant names in the PST files. Furthermore, if a folder already exists, the imported folder is renamed (a number is appended). Fortunately, I found a very impressive PowerShell script on the Microsoft TechNet forum that is meant to solve this issue.

After a bit of examination and preparation the script worked flawlessly; it cleaned up the mess in the mailboxes nicely. I really appreciate the author's effort and generosity. I also worked on the script a bit; I think it is much easier to use now.

First of all, the script is based on EWS 1.1, so you have to download and install it on the Exchange Server. (Update: It seems the EWS 1.1 link is dead, try EWS 1.2, it should work too)

Now, create a CSV file that describes the folder mappings:

folder,source
Inbox,"Beérkezett üzenetek"
Inbox,"Beérkezett üzenetek1"
Inbox,Inbox
Inbox,Inbox1
SentItems,"Elküldött üzenetek"
SentItems,"Elküldött üzenetek1"
SentItems,"Sent Items"
SentItems,"Sent Items1"
Notes,Feljegyzések
...

The first element of each row is the invariant name of a well-known folder (my sample file contains all of them), the second is a possible localized name that the script should look for.

In the following example, the script is going to be executed on my mailbox. First, import the Exchange PowerShell snapin and the cleanup script.

Add-PSSnapin Microsoft.Exchange.Management.Powershell.E2010
. ./Cleanup-Mailbox.ps1

Query the problematic mailboxes, in this case: mine.

$m = Get-Mailbox tom

The user that executes the script needs Full Access permission on the mailboxes.

$m | Add-MailboxPermission -User Admin -AccessRights FullAccess

Now, run the cleanup script on the mailboxes by simply piping them to the Cleanup-Mailbox function. The Mapping parameter should point to the previously created CSV file. If the Test switch is specified, then the script will only output diagnostic messages without making any modifications. I suggest trying it this way before production use.

$m | Cleanup-Mailbox -Mapping ./folders.csv -Test
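
Once the diagnostic output looks correct, run the script again without the Test switch to actually perform the modifications:

$m | Cleanup-Mailbox -Mapping ./folders.csv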

That’s it, my mailbox has been cleaned up! It is recommended to remove the Full Access permission.

$m | Remove-MailboxPermission -User Admin -AccessRights FullAccess

You can download the cleanup script from here, and the previously mentioned sample mapping CSV file here.

Installing a Mercurial server on Debian

This guide describes how to set up a Mercurial server on Debian. It is designed to be a good starting point to make the first steps easier. The provided scripts were tested on Debian Squeeze and have to be executed with root privileges, so if the current user is not root, type this before doing anything:

su root

First, install the required packages:

apt-get install mercurial apache2 libapache2-mod-wsgi

Create a directory for the future Mercurial repositories:

mkdir /var/repo
mkdir /var/repo/hg

Create a script file that automates the creation of a repository. It initializes a repository and grants the appropriate rights:

hg init $1

chmod 770 ./$1 -R
chown www-data.root ./$1 -R

Script to achieve this:

cd /var/repo/hg
echo "hg init \$1" > create.sh
echo "chmod 770 ./\$1 -R" >> create.sh
echo "chown www-data.root ./\$1 -R" >> create.sh
chmod +x create.sh

The Mercurial server is hosted by Apache; create a directory for its virtual host:

mkdir /var/www/vhosts
mkdir /var/www/vhosts/hg
mkdir /var/www/vhosts/hg/cgi-bin

We need a WSGI script and a configuration file in the cgi-bin directory. This sample configuration makes the server show all repositories located in the previously created directory.

hgweb.config

[web]
style = coal
allow_push = *
push_ssl = false
[paths]
/ = /var/repo/hg/**

The stub of the WSGI script should be downloaded from the appropriate repository, but I simply provide it here (I hope this does not result in licensing issues). The script refers to the previously created configuration file; unfortunately, using an absolute path is inevitable.

hgwebdir.wsgi

config = "/var/www/vhosts/hg/cgi-bin/hgweb.config"
import cgitb; cgitb.enable();
from mercurial import demandimport; demandimport.enable()
from mercurial.hgweb import hgweb
application = hgweb(config)

Script that creates these files:

cd /var/www/vhosts/hg/cgi-bin
echo "[web]" > hgweb.config
echo "style = coal" >> hgweb.config
echo "allow_push = *" >> hgweb.config
echo "push_ssl = false" >> hgweb.config
echo "" >> hgweb.config
echo "[paths]" >> hgweb.config
echo "/ = /var/repo/hg/**" >> hgweb.config
echo "config = \"/var/www/vhosts/hg/cgi-bin/hgweb.config\"" > hgwebdir.wsgi
echo "" >> hgwebdir.wsgi
echo "import cgitb; cgitb.enable();" >> hgwebdir.wsgi
echo "" >> hgwebdir.wsgi
echo "from mercurial import demandimport; demandimport.enable()" >> hgwebdir.wsgi
echo "from mercurial.hgweb import hgweb" >> hgwebdir.wsgi
echo "" >> hgwebdir.wsgi
echo "application = hgweb(config)" >> hgwebdir.wsgi

The virtual directory that is going to be hosted by Apache has to be initialized, so create the corresponding site configuration file: /etc/apache2/sites-available/hg

<VirtualHost *:80>
 ErrorLog /var/log/apache2/hg-error.log
 CustomLog /var/log/apache2/hg-access.log common
 WSGIScriptAliasMatch ^(.*)$ /var/www/vhosts/hg/cgi-bin/hgwebdir.wsgi$1
</VirtualHost>

Script to achieve this:

cd /etc/apache2/sites-available
echo "<VirtualHost *:80>" > hg
echo " ErrorLog /var/log/apache2/hg-error.log" >> hg
echo " CustomLog /var/log/apache2/hg-access.log common" >> hg
echo " WSGIScriptAliasMatch ^(.*)$ /var/www/vhosts/hg/cgi-bin/hgwebdir.wsgi\$1" >> hg
echo "" >> hg
echo "</VirtualHost>" >> hg

Almost there! The site has to be enabled and Apache has to be reloaded:

a2dissite default
a2ensite hg
/etc/init.d/apache2 reload

We are done. Now, you can use the script to create a new repository:

/var/repo/hg/create.sh repository_name

To access the repositories, simply browse to this URL: http://ipaddress_of_server/
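
For example, a repository created with the script above can be cloned like this (using the repository name from the previous step):

hg clone http://ipaddress_of_server/repository_name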

Keep in mind that the provided scripts only serve as a starting point. For example, it is highly recommended to complete the site configuration file with the ServerName directive and to provide authentication and authorization mechanisms.
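
As a sketch of the latter, basic authentication could be added like this. The password file path and user name are illustrative; the htpasswd tool is part of the Apache utilities:

htpasswd -c /etc/apache2/hg.htpasswd username

Then the VirtualHost can be extended with a Location block:

<Location />
 AuthType Basic
 AuthName "Mercurial repositories"
 AuthUserFile /etc/apache2/hg.htpasswd
 Require valid-user
</Location>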

The complete script can be downloaded here.

Installing a Subversion server on Debian

This guide describes how to set up an SVN server on Debian. It is designed to be a good starting point to make the first steps easier. The provided scripts were tested on Debian Squeeze and have to be executed with root privileges, so if the current user is not root, type this before doing anything:

su root

First, install the required packages:

apt-get install apache2 subversion libapache2-svn

Create a directory for the future Subversion repositories:

mkdir /var/repo
mkdir /var/repo/svn

Create a script file that automates the creation of a repository. It initializes a repository and grants the appropriate rights:

svnadmin create $1

chmod 770 ./$1 -R
chown www-data.root ./$1 -R

Script to achieve this:

cd /var/repo/svn
echo "svnadmin create $1" > create.sh
echo "chmod 770 ./\$1 -R" >> create.sh
echo "chown www-data.root ./\$1 -R" >> create.sh
chmod +x create.sh

The Subversion server is hosted by Apache, so create the corresponding site configuration file: /etc/apache2/sites-available/svn

This sample configuration makes the server show all repositories located in the previously created directory.

<VirtualHost *:80>
 ErrorLog /var/log/apache2/svn-error.log
 CustomLog /var/log/apache2/svn-access.log common

 <Location />

  DAV svn
  SVNParentPath /var/repo/svn/
  SVNListParentPath on
  SVNAutoVersioning on
 
 </Location>
</VirtualHost>

Script that creates this configuration file:

cd /etc/apache2/sites-available

echo "<VirtualHost *:80>" > svn
echo " ErrorLog /var/log/apache2/svn-error.log" >> svn
echo " CustomLog /var/log/apache2/svn-access.log custom" >> svn
echo " >> svn 
echo " <Location />" >> svn
echo "  DAV svn" >> svn
echo "  SVNParentPath /var/repo/svn/" >> svn
echo "  SVNListParentPath on" >> svn
echo "  SVNAutoVersioning on" >> svn
echo " </Location>" >> svn
echo "</VirtualHost>" >> svn

The configuration is complete; now the site has to be enabled and Apache has to be reloaded:

a2dissite default
a2ensite svn

/etc/init.d/apache2 reload

It’s done. Now, we can use this to create a new repository:

/var/repo/svn/create.sh repository_name

To access the repositories, simply use this URL: http://ipaddress_of_server/
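
For example, a working copy of a repository created with the script above can be checked out like this (using the repository name from the previous step):

svn checkout http://ipaddress_of_server/repository_name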

Keep in mind that the provided script only serves as a starting point. For example, it is highly recommended to complete the site configuration file with the ServerName directive and to provide authentication and authorization mechanisms.

The complete script can be downloaded from here.