Sunday, October 18, 2009

Concurrency Levels Tuning with Task Parallel Library (How Many Threads to Use?)

In the post ‘Scaling Up with Task Parallel Library’ we reviewed the way in which the TPL (Task Parallel Library) facilitates the process of adding concurrency to applications. One of TPL’s features that I thought worth a separate discussion is that it enables developers to manually tune the number of threads (the concurrency level) that will be used to serve a given set of tasks.

Manual Concurrency Control?

Relying on TPL to determine the concurrency levels is probably the best idea for most applications. As of now, TPL’s default configuration is to set the concurrency level to one thread per processor, while in cases of low CPU utilization it may take the liberty to add threads as appropriate. This scheme is often a good solution since having two or more threads running on the same processor results in time slicing overhead which can drastically harm performance.

The primary overheads of time slicing are 1) saving the register state of a thread when suspending it, and restoring the state when resuming it, 2) restoring a thread's cache state, and 3) thrashing virtual memory.

However, the thread-per-processor scheme may result in low CPU utilization since a running thread may be interrupted (e.g. by a mouse event) or blocked (e.g. while waiting for an I/O operation to complete) for a while, and while it blocks the matching processor doesn’t do any actual work.

Since most of the tasks in an application do block some of the time (some for long periods), it’s safe to say that appropriate concurrency level tuning leads to improvements in performance. The only problem is that it can be really hard to calculate the “appropriate” number of threads that will yield optimal performance, and miscalculating can lead to degradation in performance and unexpected behavior. One should take into account that high concurrency levels can result in excessive context switching or resource contention, and that low concurrency levels can result in low CPU utilization.


Putting All the Cores to Work - All the Time - With Minimum Contention

The following figure (from MSDN) shows throughput (number of completed work items per second, while each work item has 100ms of execution time with 10ms of CPU usage and 90ms of waiting) as a function of concurrency level (number of threads) running on a dual-core 2GHz machine.


As you can see in the figure above, throughput peaks at a concurrency level of 20 threads and degrades when the concurrency level exceeds 25 threads, where the degradation is mainly due to context switching. Clearly we can’t conclude that 20 is the right number of threads for a dual-core machine, as in reality most work items don't wait 90% of the time. However, it’s pretty clear that we should consider the percentage of time that tasks will block in order to correctly tune the concurrency level – if high throughput is indeed our primary goal.

Since the TPL default policy is to use one thread per processor, we can conclude that TPL initially assumes that the workload of a task is ~100% working and 0% waiting; if the initial assumption fails and the task enters a waiting state (i.e. starts blocking) - TPL will take the liberty to add threads as appropriate.

Calculating the ‘Appropriate’ Amount of Threads

The following simplified formula calculates the “ideal” number of threads according to the number of CPUs on the machine and the percentage of time that the tasks will block.

NumThreads = NumCPUs / (1 – BlockingTimeInPercentage)

Although we can’t really calculate the actual blocking time, this formula can give us a clue about the number of threads that will produce the best performance.
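For example, plugging in the numbers from the MSDN figure above (a dual-core machine with work items that block 90% of the time) yields 2 / (1 – 0.9) = 20 threads, which matches the throughput peak in the graph. A minimal sketch:

```csharp
using System;

class ConcurrencyEstimate
{
    // Simplified formula: NumThreads = NumCPUs / (1 - BlockingTimeInPercentage).
    static int EstimateThreads(int numCpus, double blockingFraction)
    {
        return (int)Math.Round(numCpus / (1 - blockingFraction));
    }

    static void Main()
    {
        // Dual-core, 90ms waiting out of every 100ms => 20 threads,
        // the peak of the throughput curve in the figure above.
        Console.WriteLine(EstimateThreads(2, 0.9));
    }
}
```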

Calculating Maximum Speedup

Another way of addressing the problem is by calculating the maximum number of tasks that can run in parallel before beginning to see degradation in performance rather than a continued speedup. Amdahl's Law suggests the following equation (S is the percentage of the work that cannot be parallelized):

SpeedUp = 1/(S + ((1 – S)/NumCPUs))
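For example, if 10% of the work cannot be parallelized (S = 0.1), a dual-core machine can speed the work up by at most 1/(0.1 + 0.9/2) ≈ 1.82x, and no number of cores can push the speedup past 1/S = 10x. A minimal sketch:

```csharp
using System;

class AmdahlEstimate
{
    // Amdahl's Law: S is the fraction of the work that cannot be parallelized.
    static double MaxSpeedup(double s, int numCpus)
    {
        return 1.0 / (s + (1.0 - s) / numCpus);
    }

    static void Main()
    {
        Console.WriteLine(MaxSpeedup(0.1, 2));   // ~1.82 on a dual-core
        Console.WriteLine(MaxSpeedup(0.1, 100)); // approaches the 1/S = 10 ceiling
    }
}
```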

Concurrency Control in TPL (C# .NET Source Code)

In the code snippets below, the ‘ideal threads per processor’ is set to 2, yielding a concurrency level of 4 threads for a dual-core machine. This means that TPL will ideally use 4 threads to run in parallel the 3 tasks that were assigned with the scheduler/parallel-options. However, TPL will exceed 4 threads if some of the tasks are blocking, in order to keep the CPU utilization high.

Visual Studio 2008 – With TaskManagerPolicy
ThreadPriority priority = ThreadPriority.Highest;

TaskManagerPolicy defaultPolicy = TaskManager.Default.Policy;
int minProcessors = defaultPolicy.MinProcessors;
int idealProcessors = defaultPolicy.IdealProcessors;
int idealThreadsPerProcessor = 2;
int maxStackSize = defaultPolicy.MaxStackSize;

TaskManagerPolicy policy = new TaskManagerPolicy(
    minProcessors, idealProcessors, idealThreadsPerProcessor,
    maxStackSize, priority);

TaskManager tm = new TaskManager(policy);

Task t1 = Task.Factory.StartNew(delegate { taskWork(); }, tm);
Task t2 = Task.Factory.StartNew(delegate { taskWork(); }, tm);
Task t3 = Task.Factory.StartNew(delegate { taskWork(); }, tm);
Visual Studio 2010 - With ‘ParallelOptions’

const int idealThreadsPerProcessor = 2;

int concurrencyLevel =
    Environment.ProcessorCount * idealThreadsPerProcessor;

ParallelOptions options = new ParallelOptions
{
    MaxDegreeOfParallelism = concurrencyLevel
};

Parallel.For(0, 3, options, delegate(int param)
{
    taskWork();
});

Further Reading

Why Too Many Threads Hurts Performance, and What to do About It.

CLR Inside Out - Using concurrency for scalability (Joe Duffy)

Saturday, October 3, 2009

UML Use Case Diagrams – Modeling the System Functionality

UML use case diagrams are used to illustrate the behavior of the system during requirements analysis; they show system-wide use cases and point out which use case is performed for which actor.

A use case describes a sequence of actions that make up one or more business processes and provides something of measurable value to an actor. An actor is a person, organization, or external system that plays a role in one or more interactions with your system.

In this post we’ll take a step-by-step tour through constructing a use case diagram and writing a use case.

Constructing a Use Case Diagram

Just after the system engineers are done translating the customer requests into system requirements - they start designing the business processes that will implement the requirements, taking advantage of the use-case diagram in order to describe the various business processes from the standpoint of the user.

The figure below shows a use case diagram which presents the processes involved in an ‘automatic threats detection’ system, which periodically processes images coming from a surveillance video camera and uses an external threats evaluation system in order to detect threats approaching a secured area.


As you can see, the diagram consists of 5 use cases that describe 1) the system initialization process, which can be initiated by a simple user or by an operator user, 2) the login process, which can be initiated only by an operator user, 3) the image processing process, which is driven by a video stream coming from the surveillance camera, 4) the threat evaluation process, which is part of the image processing process and uses a sub-system referred to as ‘threat evaluation system’, and 5) the threats distribution process, which is enabled only under some conditions.

The Relationships

The ‘User’ actor is associated with the ‘Initiate System’ use case, which implies that there’s a business process that describes what happens during system initialization, and that the ‘User’ is one of the actors that can trigger it.

The ‘Evaluate Threat’ use case is associated with the ‘Threat Evaluation System’ actor which implies that during the threat evaluation process the ‘Threat Evaluation System’ is called into action. 

The generalization relationship between the ‘User’ and the ‘Operator’ actors implies that all the use cases associated with the ‘User’ (parent) actor are inherently associated with the ‘Operator’ (child) actor.

The ‘Login’ use case depends on the ‘Initiate System’ use case, which means that the operator is not allowed to initiate the login process if the system wasn’t initialized before. The same goes for the ‘Process Camera Image’ use case, which doesn’t start until the system is fully initialized.

The ‘Process Camera Image’ use case includes the ‘Evaluate Threat’ use case which implies that the behavior of the ‘Evaluate Threat’ use case is inserted into the behavior of the ‘Process Camera Image’ use case, or in other words, “The act of processing camera image includes a sub process of evaluating the image for threats”.

We often use the ‘Include’ association to extract common behaviors from multiple use cases into a single description, but it’s also common to use the ‘Include’ association to divide a use case (container) into several smaller use cases (parts).

The ‘Distribute Threats Information’ use case extends the ‘Evaluate Threat’ use case which implies that the behavior of the ‘Distribute Threats Information’ use case may be inserted in the ‘Evaluate Threat’ use case under some conditions, or in other words, “In case the distribution feature is enabled, the threats evaluation process is extended with the distribution of the detected threats”.

Writing a Use Case

After drawing an overall view of the system use cases - we switch from the UML tool to our favorite document editor and start describing the use cases one by one. We’ll usually use a template that consists of several core sections that the engineering team agreed upon. The template that I use consists of the following sections.


The ‘Pre Conditions’ section defines the conditions that must be met in order for the use case to execute. The ‘Triggers’ section defines one or more actions that can trigger the use case. The ‘Main Success Scenario’ section describes the actions that make up the use case – one can be brief and provide a few sentences summarizing the actions, or provide a detailed review of each action, including alternative paths etc. The ‘Post Conditions’ section describes what the change in the state of the system will be after the use case completes.


PPT - Use Case Diagrams

Friday, September 4, 2009

UML Deployment Diagrams – Modeling the System Physical Architecture

In the previous post we saw how component diagrams can be used to model the logical architecture of a system. In this post we’ll see how deployment diagrams are used to model the physical architecture of a system; we’ll start from the simplest use of the deployment diagram, in which we only present the nodes and their inter-relationships, and complete the picture by including the components and the applications that run on the nodes.

Connecting the Nodes

Very early in the system’s lifetime - deployment diagrams are used to show the nodes (computers, virtual machines) and the external devices (if there are any) which construct the system. A ‘node’ usually refers to a computer, which can be stereotyped as server, client, workstation etc. A ‘device’ is a subclass of ‘node’ which refers to a resource with processing capability, such as a camera, printer, measurement instrument etc. The nodes and the devices are usually wired through the ‘Communication Path’ connector, which illustrates the exchange of signals and messages between both ends.


Notice that the client node is stereotyped as ‘pc-client’ (indicated by the icon) and the server node is stereotyped as ‘pc-server’.

The following diagram shows the deployment architecture of a scalable, fault tolerant ‘Camera control and image processing’ system. The system consists of N servers, a load balancer with redundancy, and several clients.


The client machines present the live state of all the cameras available in the system, and allow the user to control the cameras and initiate all kinds of activities on the servers. The load balancer processes the inputs that it receives from the clients and sends the appropriate instructions to the appropriate server; it is designed to gracefully scale to an increasing number of servers. Since the load balancer is a single point of failure, a passive load balancer (that maintains a copy of the active load balancer’s state) runs in the background, ready to replace the active load balancer in case of a crash. All the servers run the same application; they support different kinds of cameras and can be configured to manage up to 200 cameras of different kinds.

Including the Components

In the next stage we are ready to put in the components that run on the physical nodes. As indicated in the previous post, when using components to model the physical architecture of a system (as in this case) the term ‘component’ refers to a DLL or some executable.

The following figure shows a snapshot of the above diagram with the addition of the components that reside on the nodes.


As you can see, the client node includes the ‘CamerasC2C.Client’ component, which uses infrastructure level controls that reside within ‘Company.Contorls’, which in turn includes classes that derive from framework level controls (notice the use of stereotypes to divide the components into levels/layers). The ‘CamerasC2C.Client’ component communicates with the load balancer’s ‘CamerasC2C.LoadBalance’ component, which transfers instructions to the appropriate server through the ‘IServer’ interface. The server consists of 3rd party components that were shipped with the cameras’ hardware; each component exposes an interface through which the camera can be controlled. The ‘CamerasC2C.Server.Cameras’ component includes adapter classes which wrap the 3rd party interfaces and expose matching interfaces that fit the system’s requirements and speak the system language (use system level classes etc). The ‘CamerasC2C.Server.Core’ component uses the interfaces exposed by ‘CamerasC2C.Server.Cameras’ in order to command the cameras as appropriate.

Presenting the Applications

In order to show the applications that run on the different nodes and the components that make up the applications – we use artifacts, wired to nodes through the ‘deploy’ connector and wired to components through the ‘manifest’ connector.


Presenting External Applications

In order to show the way in which the system interacts with external applications - artifacts can be used to represent the external application, as illustrated in the following diagrams.


The ‘CamerasC2C.Server.Cameras’ component encapsulates the communication with an external application called ‘BMC Camera Control Application’, which resides on the ‘BLC Machine’ server.

Saturday, August 22, 2009

UML 2.0 Component Diagrams – Modeling the System Logical Architecture

In UML 2.0, component diagrams are used to model the logical architecture of a system by showing the system’s high level components and their inter-relationships. In the next post I will show how components are used in deployment diagrams to model the physical architecture of a system.

A component is an encapsulated unit within a system which provides one or more interfaces. When using components to model the logical architecture of a system (solely in component diagrams) the term ‘component’ refers to a collection of classes which can be reused and replaced as a whole, where a single logical component can be scattered across multiple physical nodes. When using components to model the physical architecture of a system (usually in deployment diagrams, though some people still accustomed to UML 1.x use it in component diagrams) the term ‘component’ refers to a DLL or some executable.

Simple Wiring

Abstract Connector

The simplest and most abstract way to illustrate relationships between components is by using the dependency connector, which can refer to a wide range of dependent relationships like realization and usage.


By wiring two components with the dependency connector we state that some of the classes/interfaces in component2 are required/realized/implemented by classes/interfaces from component1.

Generalization Connector

To show that some classes in component1 derive from classes in component2 we use the ‘generalization’ connector.


Overall Look

Let’s see how a component diagram with simple wiring can be used to model a simple orders management application which includes UI editors, a repository, and a web service.

Notice that stereotypes are used to divide the components into layers, in order to distinguish between application components (views, controllers, presenters), domain components (business logic, data access layer) and infrastructure components.


The ‘Orders Management’ and ‘Admin’ components include application level classes, such as UI editors and presentation logic classes, which derive from framework level classes that reside within ‘System.Windows.Forms’ (Control, Component, UserControl, Form etc). The ‘Orders’ component includes domain level classes, such as a repository which uses web-service classes from the ‘WebServices.Orders’ component. Classes from both the ‘Admin’ and ‘Orders’ components use the infrastructure level ‘Login’ component in order to log in to the system.

Interfaces Wiring

In order to be more specific about the relationships between the components - we show the interfaces that the components expose and require, and the way in which they are wired together.

Socket Connector

The socket connector, which is new to UML 2.0, shows that a component requires a specific interface. The figure below shows that the ‘Orders Management’ component requires the ‘Orders Repository’ interface.


Lollipop Connector

The lollipop connector shows that a component exposes a specific interface.


Wiring the Connectors - Realization

The connectors can be wired together to show that one (or more) of the classes/interfaces of a component realizes an interface exposed by another component. In this case the ‘Orders’ component contains a class that realizes the ‘IOrdersWebService’ interface exposed by the ‘WebServices.Orders’ component.


Wiring the Connectors – Usage

In other cases the connectors can be wired together to show that one component requires an interface which the other component exposes. In this case the ‘Orders Management’ component requires the ‘Orders Repository’ interface of the ‘Orders’ component.


UML 2.0 introduces the assembly connector that shows the exact same thing.


Overall Look

The following figure shows how the same architecture from the ‘simple wiring’ section can be modeled in more detail by pointing out the interfaces that the components expose and require.


Now we can see that the ‘Orders’ component realizes the ‘IOrdersWebService’ interface and uses some of the classes of ‘WebServices.Orders’ component.

The ‘Orders’ and the ‘WebServices.Orders’ components execute on different nodes. The ‘Orders’ component (client) contains a proxy class which realizes the ‘IOrdersWebService’ interface. The ‘WebServices.Orders’ component (server) contains a web service class which also implements the ‘IOrdersWebService’ interface. The proxy initiates remote calls on the web service object through the ‘IOrdersWebService’ interface.
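In C# terms, this realization wiring is simply two classes implementing the same interface on opposite sides of the wire. The sketch below borrows the interface name from the diagram; the class names, members and transport details are illustrative assumptions:

```csharp
using System;

// The contract both components share.
public interface IOrdersWebService
{
    Order[] GetOrders(int customerId);
}

public class Order { }

// Client side ('Orders' component): a proxy that forwards
// calls to the remote web service.
public class OrdersWebServiceProxy : IOrdersWebService
{
    public Order[] GetOrders(int customerId)
    {
        // Serialize the call and send it over the wire...
        throw new NotImplementedException();
    }
}

// Server side ('WebServices.Orders' component): the actual service.
public class OrdersWebService : IOrdersWebService
{
    public Order[] GetOrders(int customerId)
    {
        // Query the data storage and return the orders...
        throw new NotImplementedException();
    }
}
```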

The ‘Login’ component exposes the ‘ILoginManager’ interface through which the ‘Orders Management’ and the ‘Admin’ components login to the system.

Both the ‘Login’ and ‘Admin’ components contain a proxy object that initiates remote calls on the web service object that resides within ‘WebServices.Users’, through the ‘IUsersWebService’ interface.


Sunday, August 2, 2009

MVVM for .NET Winforms – MVP-VM (Model View Presenter - View Model) Introduction

This post introduces the MVP-VM (Model View Presenter - View Model) design pattern, which is the windows forms (winforms) equivalent of WPF/Silverlight MVVM. The MVP-VM pattern is best suited to winforms applications that require full testing coverage and use data binding extensively for syncing the presentation with the domain model.



Before we start digging deep into MVP-VM, let’s have a quick review of the patterns from which it has evolved.


Presentation Model

Martin Fowler introduced the Presentation Model pattern as a way of separating presentation behavior from the user interface, mainly to promote unit testing. With Presentation Model, every View has a Presentation Model that encapsulates its presentation behavior (such as how to handle a buttonXXX click) and state (whether a check box is checked/unchecked).


Whenever the View changes it informs its Presentation Model about the change; in response, the Presentation Model changes the Model as appropriate, reads new data from the Model and populates its internal view state. In turn, the View updates the screen according to the Presentation Model’s updated view state.

The downside of this pattern is that a lot of tedious code is required in order to keep the Presentation Model and the View synchronized. A way to avoid writing the synchronization code is to bind the Presentation Model properties to the appropriate widgets on the View, such that changes made to the Model will automatically reflect on the View, and changes made by the user will automatically flow from the View, through the Presentation Model, to the underlying Model object.
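A minimal winforms sketch of that binding approach (the class, property and widget names here are hypothetical, not taken from Fowler’s article): the Presentation Model raises PropertyChanged, and the Binding infrastructure keeps the widget in sync without hand-written synchronization code.

```csharp
using System.ComponentModel;
using System.Windows.Forms;

// Hypothetical Presentation Model: raises PropertyChanged so that
// bound widgets refresh automatically.
public class LoginPresentationModel : INotifyPropertyChanged
{
    private string m_userName;

    public string UserName
    {
        get { return m_userName; }
        set
        {
            m_userName = value;
            OnPropertyChanged("UserName");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string name)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(name));
    }
}

// Somewhere in the View, wire a TextBox to the Presentation Model:
// m_txtUserName.DataBindings.Add("Text", presentationModel, "UserName",
//     false, DataSourceUpdateMode.OnPropertyChanged);
```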

MVVM  (Model View View Model) for WPF

MVVM (Model View View Model) introduces an approach for separating the presentation from the data in environments that empower data binding, such as WPF and Silverlight (see Developing Silverlight 4.0 Three Tiers App with MVVM). As you can see in the picture below, MVVM is almost identical to the Presentation Model pattern, just instead of 'Presentation Model' – we have 'View Model', and the two way data binding happens automatically with the help of the WPF/Silverlight runtime (read more).


With WPF, the bindings between the View and the View Model are simple to construct because each View Model object is set as the DataContext of its paired View. If a property value in the View Model changes, the change automatically propagates to the View via data binding. When the user clicks a button in the View, a command on the View Model executes to perform the requested action. The View Model, never the View, performs all modifications made to the Model data.

MVP-VM (Model View Presenter - View Model)

Starting from .NET Framework 2.0, the Visual Studio designer supports binding objects to user controls at design time, which greatly simplifies and motivates the use of data binding in winforms applications. Even when designing a simple UI without the use of any fancy pattern – it often makes sense to create a View Model class that represents the View display (a property for every widget) and bind it to the View at design time. You can read all about it in ‘Data Binding of Business Objects in Visual Studio .NET 2005/8’.

When creating a .NET winforms application that consists of many Views that present a complex domain model and include complex presentation logic - it often makes sense to separate the Views from the domain model using the Model View Presenter pattern. One can use Supervising Controller or Passive View, depending on the required testing coverage and the need for data binding.

With Supervising Controller, data binding is simple but presentation logic cannot be fully tested, since the Views (that are usually being mocked) are in charge of retrieving data from the Model. With Passive View, the thin Views allow full testing coverage, and the fact that the Presenter is in charge of the entire workflow greatly simplifies testing. However, direct binding between the Model and the View is discouraged. For more details please refer to ‘Model View Presenter Design Pattern with .NET Winforms’.

MVP-VM is about combining the two patterns so we won’t have to give up on data binding nor cut down on testability. This is achieved by adopting the Passive View pattern while allowing an indirect link between the Model and the View.

MVP-VM Overview


The View is in charge of presenting the data and processing user inputs. It is tightly coupled to the Presenter, so when user input is triggered (a button has been clicked) it can directly call the appropriate method on the Presenter. Its widgets are bound to the matching View Model properties, such that when a property of the View Model changes – the linked widget changes as a result, and when the widget value changes – the View Model property changes as a result.

The View Model exposes properties that are bound to its matching View widgets. Some of its properties are linked directly to the Model object, such that any change made to the Model object automatically translates to a change on the View Model and as a result appears on the View, and vice versa; and some of its properties reflect View state that is not related to Model data, e.g. whether buttonXXX is enabled. In some cases the View Model is merely a snapshot of the Model object’s state, so it exposes read-only properties. In this case the attached widgets cannot be updated by the user.

The Presenter is in charge of presentation logic. It creates the View Model object, assigns it the appropriate Model object/s, and binds it to the View. When it is informed that a user input has been triggered, it executes according to application rules, e.g. commands the Model to change as appropriate, makes the appropriate changes on the View Model etc. It is synchronized with the Model via Observer-Synchronization, so it can react to changes in the Model according to application rules. In cases where it’s more appropriate for the Presenter to change the View directly rather than through its View Model, the Presenter can interact with the View through its interface.
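The Observer-Synchronization between the Presenter and the Model can be as simple as a .NET event subscription. A hedged sketch - the CustomersChanged event and the presenter shown here are illustrative assumptions, not part of the case study below:

```csharp
using System;

// Hypothetical Model contract exposing a change notification.
public interface ICustomersModel
{
    event EventHandler CustomersChanged;
}

public class CustomersPresenter
{
    private readonly ICustomersModel m_model;

    public CustomersPresenter(ICustomersModel model)
    {
        m_model = model;

        // Observer-Synchronization: the presenter reacts to Model
        // changes according to application rules.
        m_model.CustomersChanged += delegate { RefreshViewModel(); };
    }

    private void RefreshViewModel()
    {
        // Repopulate the View Model from the Model here; data binding
        // then updates the View automatically.
    }
}
```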

The Model is a bunch of business objects that can include data and behaviors, such as querying and updating the DB and interacting with external services. Model objects that only contain data are referred to as ‘data entities’.

How does it Work?


As you can see in the figure above, each UI widget is bound to a matching property on the ‘customer view model’, and each property of the ‘customer view model’ is linked to a matching property on the ‘customer data entity’. So for example, when the user changes the value of the ‘Name’ textbox – the ‘Name’ property of the ‘customer view model’ is automatically updated via data binding, which causes an update of the ‘Name’ property of the ‘customer data entity’. In the other direction, when the ‘customer data entity’ changes – the changes reflect on the ‘customer view model’, which causes the appropriate widgets on the view to change via data binding.

When the user clicks on the ‘Save’ button, the view responds and calls the appropriate method on the presenter, which responds according to application logic; in this case - it calls the ‘Save’ method of the ‘customer dao’ object.

In cases where the ‘application logic’ is more sophisticated, the presenter may bypass the ‘view model’ and make direct changes on the view through its abstraction. In some cases a ‘view model’ property can be linked to a view widget on one side – but not linked to a model object on the other side; in such cases the ‘view model’ will be prompted to change by the presenter, which will result in the appropriate change on the view widget.

Case Study – MVP-VM

In the following case study the MVP-VM pattern is used to separate the concerns of a simple application that presents a list of customers and allows adding a new customer. We’ll focus on the ‘Add Customer’ screen.


Class Diagram – Add New Customer


The AddCustomerPresenter holds references to the AddCustomerViewModel, the AddCustomerView and the CustomerDao (model). It references the AddCustomerViewModel and the AddCustomerView so it can establish data binding between the two, and it references the CustomerDao so it can change it and register to its events.

The AddCustomerView holds reference to the AddCustomerPresenter so it can call the ‘SaveClicked’ method when the ‘Save’ button is clicked.

Sequence Diagram - Initialization


The AddCustomerView is instantiated by some class in the application and injected with an instance of the CustomerDao (model); it instantiates the AddCustomerPresenter and injects it with the CustomerDao and with itself. The AddCustomerPresenter prompts the CustomerDao to create a new CustomerDataEntity, instantiates the AddCustomerViewModel injecting it with the newly created CustomerDataEntity, and calls ‘ShowCustomer’ on the AddCustomerView in order to data bind it to the AddCustomerViewModel.

Sequence Diagram - Saving New Customer


The AddCustomerView responds to a click on the ‘Save’ button and calls the appropriate method on the AddCustomerPresenter. The AddCustomerPresenter calls ‘ReadUserInput’ on the AddCustomerView, which in response alerts its internal ‘binding source’ to reset binding, which causes the content of its widgets to be re-read into the AddCustomerViewModel (read more about data binding of business objects). The AddCustomerPresenter then evaluates the CustomerDataEntity (which was updated automatically since it’s linked to the AddCustomerViewModel) and checks whether the new customer already exists in the data storage. In case there are no duplications it commands the CustomerDao (model) to save the customer.

Here’s the code:


public partial class AddCustomerView : Form, IAddCustomerView
{
    private AddCustomerPresenter m_presenter;

    public AddCustomerView(ICustomerDao dao)
    {
        InitializeComponent();

        m_presenter = new AddCustomerPresenter(this, dao);
    }

    public void ShowCustomer(CustomerViewModel customerViewModel)
    {
        cusomerViewModelBindingSource.DataSource = customerViewModel;
    }

    public void ReadUserInput()
    {
        cusomerViewModelBindingSource.EndEdit();
    }

    public void ShowError(string message)
    {
        MessageBox.Show(message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }

    private void m_btnSave_Click(object sender, EventArgs e)
    {
        m_presenter.SaveClicked();
    }

    private void m_btnCancel_Click(object sender, EventArgs e)
    {
        m_presenter.CancellClicked();
    }
}


public class AddCustomerPresenter
{
    private IAddCustomerView m_view;
    private ICustomerDao m_customerDao;
    private CustomerViewModel m_viewModel;

    public AddCustomerPresenter(IAddCustomerView view, ICustomerDao customerDao)
    {
        m_view = view;
        m_customerDao = customerDao;

        // Create the data entity
        CustomerDataEntity customerDataEntity = customerDao.CreateCustomerDataEntity();
        CustomerViewModel customerViewModel = new CustomerViewModel(customerDataEntity);

        m_viewModel = customerViewModel;

        // Bind the ViewModel to the View
        m_view.ShowCustomer(customerViewModel);
    }

    public void SaveClicked()
    {
        m_view.ReadUserInput();

        CustomerDataEntity customerDataEntity = m_viewModel.CustomerDataEntity;
        bool duplicateExists = IsDuplicateOfExisting(customerDataEntity);
        if (!duplicateExists)
        {
            m_customerDao.Save(customerDataEntity);

            m_view.Close();
        }
        else
        {
            m_view.ShowError(string.Format("Customer '{0}' already exists", m_viewModel.Name));
        }
    }

    private bool IsDuplicateOfExisting(CustomerDataEntity newCustomerDataEntity)
    {
        CustomerDataEntity duplicateCustomerDataEntity =
            m_customerDao.GetByName(newCustomerDataEntity.Name);

        return duplicateCustomerDataEntity != null;
    }

    public void CancellClicked()
    {
        m_view.Close();
    }
}


public class CustomerViewModel
{
    private readonly CustomerDataEntity m_customerDataEntity;

    public CustomerViewModel(CustomerDataEntity customerDataEntity)
    {
        m_customerDataEntity = customerDataEntity;
    }

    public string Name
    {
        get { return m_customerDataEntity.Name; }
        set { m_customerDataEntity.Name = value; }
    }

    public string CompanyName
    {
        get { return m_customerDataEntity.CompanyName; }
        set { m_customerDataEntity.CompanyName = value; }
    }

    public DateTime DateOfBirth
    {
        get { return m_customerDataEntity.DateOfBirth; }
        set { m_customerDataEntity.DateOfBirth = value; }
    }

    public int Age
    {
        get
        {
            // Note: naive calculation - it ignores whether the birthday
            // has already occurred this year.
            int age = DateTime.Now.Year - m_customerDataEntity.DateOfBirth.Year;

            return age;
        }
    }

    public CustomerDataEntity CustomerDataEntity
    {
        get { return m_customerDataEntity; }
    }
}


The case study can be downloaded from here or here