CAB, Model View Presenter, Passive View, and Humble Dialog

Topics: CAB & Smart Client Software Factory
Jul 18, 2007 at 3:38 PM
Edited Jul 18, 2007 at 3:43 PM
In trying to wrap my head around how solutions should be designed and componentized in SCSF/CAB, I've spent a bit of time trying to study up on Model View Controller (MVC) and Model View Presenter (MVP).

The packaged documentation, in my opinion, doesn't necessarily do a good job of covering these two topics and the variations of MVP that really make sense in CAB.

One of the best resources for discourse on these two topics is Martin Fowler's article on GUI Architectures, which provides a broad view of three of the most common underlying architectural choices for many of the GUIs that we work with today.

Of note is that Fowler's entry for MVP has been "retired" and replaced with Passive View (PV) and Supervising Controller (SC).

What I've observed is that PV is more "aligned" with the design of the components in CAB than SC, which Fowler summarizes:

  • The separation advantage is that it pulls all the behavioral complexity away from the basic window itself, making it easier to understand. This advantage is offset by the fact that the controller is still closely coupled to its screen, needing a pretty intimate knowledge of the details of the screen. In which case there is a real question mark over whether it's worth the effort of making it a separate object.

In my opinion, since the CAB-generated presenter isn't coupled with a concrete implementation of the view, PV is the way to go, since it allows a lower level of coupling with the concrete implementation of the view. I went about this by adding interface methods that return Control instances (one could go as generic as Object) from the view, which are then rendered, wired, and databound by the presenter. In other words, it's not really Model View Presenter as the documentation would have you believe.
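A minimal C# sketch of what I mean (all type names here are hypothetical illustrations, not SCSF-generated code): the view surrenders a Control through its interface, and the presenter does the wiring and databinding.

```csharp
using System.Windows.Forms;

// Hypothetical view contract: the view hands a Control to the presenter
// rather than exposing typed widget properties.
public interface IEmployeeView
{
    Control ListControl { get; }   // might be a DataGridView, a TreeView, etc.
}

// A stand-in view for illustration; in CAB this would be a UserControl.
public class StubEmployeeView : IEmployeeView
{
    private readonly Control _control = new DataGridView();
    public Control ListControl { get { return _control; } }
}

// The presenter is coupled only to the interface, never to the concrete
// UserControl, so the view can be replaced independently.
public class EmployeePresenter
{
    private readonly IEmployeeView _view;

    public EmployeePresenter(IEmployeeView view)
    {
        _view = view;
    }

    public void Display(object employees)
    {
        // Wire and databind the control handed over by the view.
        var grid = _view.ListControl as DataGridView;
        if (grid != null)
            grid.DataSource = employees;
    }
}
```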

This allows a great deal of flexibility in the design of the module as a whole as the view can truly be replaced completely independently of the presenter since the presenter is only coupled with the interface class. In addition, it allows for easy replacement of data visualization types using the adapter pattern to connect data to the control type (for example, having one adapter if the returned control is of type TreeView and another when the control is a DataGridView).
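To make the adapter idea concrete, here is one possible shape for it (again, hypothetical names): the presenter asks a factory for an adapter matched to whatever control type the view actually returned.

```csharp
using System;
using System.Collections;
using System.Windows.Forms;

// Hypothetical adapter that hides the concrete control type from the presenter.
public interface IDataViewAdapter
{
    void Bind(object data);
}

public class DataGridViewAdapter : IDataViewAdapter
{
    private readonly DataGridView _grid;
    public DataGridViewAdapter(DataGridView grid) { _grid = grid; }
    public void Bind(object data) { _grid.DataSource = data; }
}

public class TreeViewAdapter : IDataViewAdapter
{
    private readonly TreeView _tree;
    public TreeViewAdapter(TreeView tree) { _tree = tree; }
    public void Bind(object data)
    {
        // Flatten the data into nodes; the details depend on the data shape.
        _tree.Nodes.Clear();
        foreach (var item in (IEnumerable)data)
            _tree.Nodes.Add(item.ToString());
    }
}

// The presenter picks an adapter based on what the view returned.
public static class AdapterFactory
{
    public static IDataViewAdapter For(Control control)
    {
        if (control is DataGridView) return new DataGridViewAdapter((DataGridView)control);
        if (control is TreeView)     return new TreeViewAdapter((TreeView)control);
        throw new NotSupportedException(control.GetType().Name);
    }
}
```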

However, an even better description of the architectural intent of CAB modules is Humble Dialog, a pattern formalized by Michael Feathers. What Feathers terms "smart object" is congruent to the presenter, which "pushes data onto the view class" through a view interface. The view, of course, is the actual UserControl derived view class. Now this leaves a key question: what is the place of the model in such a pattern? Does it have a place? With SCSF (but not necessarily with CAB when used with desktop applications), the classic sense of the model almost has no place in such an architecture; the model is but a dumb container for data. It leaves you with what Fowler terms an Anemic Domain Model, an "anti-pattern":

  • The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data.
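In code, the "bags of getters and setters" plus service objects that Fowler describes might look something like this (a minimal sketch with hypothetical names):

```csharp
// An "anemic" entity: nothing but state, named after a domain noun.
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal PayRate { get; set; }
}

// The behavior lives in a service that operates *on* the model,
// rather than in the domain object itself.
public class PayrollService
{
    public decimal MonthlyPay(Employee e, decimal hours)
    {
        return e.PayRate * hours;   // domain logic outside the entity
    }
}
```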

While Fowler views this pattern with disdain, I can't help but wonder whether it is the most natural choice for a smart client style application which should rely heavily on service layers to handle the data models. Otherwise, it would seem that one would end up writing a great deal of domain objects which were nothing more than shells (well, perhaps this is still useful if complex service interactions are necessary) which make the calls through the service proxies.

I'd like to get more feedback on what the community at large feels about these architectural choices with SCSF and CAB.
Jul 18, 2007 at 5:12 PM
Charles, thanks for starting this thread. I will be following it with great interest, as I am currently exploring the same questions in anticipation of a large CAB-based smart client application we will be developing. I wanted to draw your attention to this recent thread, which (while larger in scope) overlaps in part with some of what you've brought up here:

In particular, Chris Holmes describes some of what he is doing design-wise between the presentation and middle tiers. Here's a quick summary of the exchange so you don't have to dig through the other thread:

Chris wrote:

In our shop, we don't use simple POCO objects. In fact, our business objects are rather rich. Our collections are all custom, and inherit from a base class that allows them to be databound directly to grids. We use Manager classes to aggregate business objects together and provide methods for the manipulation of that data (for instance, we have an EmployeeManager object that has child objects of Employee, Departments, EmployeeStatus, EmployeePayRate, etc.)

I responded:

I'm familiar with this pattern. We use it often in our shop. I like to call it the Manager-State pattern. It has its downsides (as all patterns do), but it's easy for most developers to get their heads around. It sounds like maybe you have a "true client/server" design? Is that a correct impression? How middle-tier-centric is your application? Is the middle tier mostly just a pass-through to the database, with your Manager classes and business objects being primarily client-side entities, or do you have a good deal of business logic happening in the middle tier? Also, is your middle tier returning serialized objects that you hydrate and use directly in the presentation tier, or is your middle tier returning more "generic" data structures that are then loaded by your client-side Manager classes into "business objects"?

Relating back to your post, Charles: what I labeled the Manager-State pattern is exactly the same thing that Fowler calls the Anemic Domain Model. I found it interesting that your post proposes that the Anemic Domain Model might be the ideal pattern for CAB development, and that Chris is actually using this pattern in CAB.

Chris responded:

Our middle tier (which I kind of like to think of as the "service" layer, because we have a facade over it, with a data access layer underneath) does return serialized objects, yes. And we do bind their collections directly to datagrids in views (individual properties we map to view elements, so we don't pass whole business objects to views). Our service layer returns our business objects (most of the time the managers) fully baked, as I like to say (or hydrated, if you prefer the term :-)

We don't do any sort of DTO/message wrapping and mapping or anything of that sort. Initially we were doing something like that, but found it really cumbersome. Then I read about the "wormhole anti-pattern" and that was enough to reaffirm my belief that we should just send the objects across the wire. I like the simplicity of one object type. And we can afford to do that because we have control over both sides of the wire; we control every piece of the server-side code, and all of the client-side code. Not everyone is as lucky, so they have to deal with web services they can't control, objects they have to map to client-side business objects, etc. I'm sure if we didn't have control of our web services and the objects being returned, we'd be forced to do something else.

Our "middle tier" is basically a service facade over our data access classes. We write proxy classes for the client to call to the webservice facade, and we expect to get back business objects or managers from those calls. So from the client perspective, our code looks like this:

EmployeeManager manager = _payrollService.GetEmployee(employeeID);

Inside our service layer we do have some business logic (the rest of the business logic resides in the manager classes themselves). We have strategy patterns that we write to perform certain algorithms, and we make service calls from the client to execute that logic and return altered managers or business objects, their states now changed.

For instance, we have a TimecardManager for the timecard portion of our application, and one of the things it needs to do whenever a user enters hours worked (or overtime, or comp time, or uses sick leave, etc.) is to calculate the overtime earned, if any, and make adjustments to the totals in real time, because that's what our users wanted. So every time a user enters a new value the TimecardManager accepts it, then we make a service call to an OvertimeCalculator that runs a series of strategies to calculate overtime values and update the TimecardManager's entries. Different agencies calculate things differently, so we have to be able to dynamically run the strategies based on configuration. So we define an interface for the strategies, and then using configuration (database or XML) the appropriate assemblies get loaded, the strategies execute, and the results are updated accordingly. Since we have different agencies that want different strategies (essentially different business rules) we don't want to deploy all of those, so that's why they get deployed server-side and run when a service call is made.
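A sketch of how such configurable strategies might be shaped (all names and rules here are hypothetical; the real implementation would resolve the strategy type from database or XML configuration and load the assembly dynamically):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical strategy interface for agency-specific overtime rules.
public interface IOvertimeStrategy
{
    decimal CalculateOvertime(decimal hoursWorked);
}

// One agency pays 1.5x for hours past 40...
public class TimeAndAHalfOver40 : IOvertimeStrategy
{
    public decimal CalculateOvertime(decimal hoursWorked)
    {
        return Math.Max(0m, hoursWorked - 40m) * 1.5m;
    }
}

// ...another simply banks the extra hours straight.
public class StraightCompTime : IOvertimeStrategy
{
    public decimal CalculateOvertime(decimal hoursWorked)
    {
        return Math.Max(0m, hoursWorked - 40m);
    }
}

public static class OvertimeCalculator
{
    // In the real service, the type name would come from configuration
    // and be instantiated via Activator.CreateInstance; a dictionary
    // stands in for that here.
    private static readonly Dictionary<string, IOvertimeStrategy> Strategies =
        new Dictionary<string, IOvertimeStrategy>
        {
            { "AgencyA", new TimeAndAHalfOver40() },
            { "AgencyB", new StraightCompTime() }
        };

    public static decimal Run(string agency, decimal hoursWorked)
    {
        return Strategies[agency].CalculateOvertime(hoursWorked);
    }
}
```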

Regarding that last paragraph, I had some additional questions for Chris to help me get a clearer idea of exactly what's taking place in the presentation tier and exactly what's taking place in the middle tier (but I didn't ask them on the other thread since I didn't want to torture Chris with any more questions :-).

On a related note, something else I am struggling with is how much "middle tier reuse" we might be giving up by focusing so much business logic in the presentation tier. If anyone can offer insight that connects Charles's questions/points to the related question of how one draws the line between logic that lives in the presentation tier and logic that lives in the middle tier, I would very much like to read it.

Jul 18, 2007 at 5:59 PM
Edited Jul 18, 2007 at 6:07 PM

Thanks for the pointer; will definitely have to dive into that thread.

My current feeling is that for most scenarios involving CAB in the context of SCSF, an Anemic Domain Model (or Manager-State) is perfectly suitable and possibly the easiest way to deal with object library explosion (a dramatic expansion in the number of classes required to get something done). While CAB/SCSF provides an EntityTranslatorService, it seems easier to simply use the autogenerated proxy classes with client-side services to contain the actual domain logic.

In Chris' case, where there are perhaps complex relationships and the business logic has to be accessible in a disconnected manner, I could see the value of building an abstraction layer - the Manager classes - which, in his usage, is more like a hybrid Domain Model that blurs the distinction between Anemic and classic Domain Models.

So perhaps the answer to the question really lies in whether the client has to support disconnected operations. It seems to me that the more business logic is built into a server-side service layer (the client doesn't support disconnected operation; more aligned with SOA), the more appropriate and relevant an Anemic Domain Model becomes on the client side...I mean seriously, who really wants to maintain and map two representations of the same object (a true Domain Object or Business Entity and the proxy/DTO that is used to communicate with the server)?

Ultimately, I'm in the same boat as you :-) I'd like to read some better discussion on overall architectural guidelines of "the big picture" with the various components of CAB/SCSF.
Jul 18, 2007 at 7:18 PM
Edited Jul 18, 2007 at 7:20 PM
I think you have hit on something here, Charles:

So perhaps the answer to the question really lies in whether the client has to support disconnected operations. It seems to me that the more business logic is built into a server-side service layer (the client doesn't support disconnected operation; more aligned with SOA), the more appropriate and relevant an Anemic Domain Model becomes on the client side...I mean seriously, who really wants to maintain and map two representations of the same object (a true Domain Object or Business Entity and the proxy/DTO that is used to communicate with the server)?

Perhaps we need a "split" Manager-State pattern (I have trouble using the Anemic Domain Model moniker because of the way that Fowler's pure OO bias is built into it):

Client Manager
State Model
Server Manager
Jul 18, 2007 at 10:53 PM

but I didn't ask them on the other thread since I didn't want to torture Chris with any more questions :-)

No torture involved :-) I encourage asking, because in this process I learn as much as anyone. I don't want anyone getting the wrong idea here; I am no expert in this area.

I admit to not knowing a lot about the patterns that encompass the domain logic area of our field (I have read portions of Eric Evans' book, but haven't dedicated time to a full reading yet). Fowler mentions in his Anemic Domain Model article that he believes the reason the pattern shows up is because many developers have probably never seen a real Domain Model before. My response is: guilty as charged.

Our particular application has been built from scratch by us. And during that process we tried a few different things and finally fell into something that worked for us. How we got there was largely through an Agile process; we just kept refactoring to take away pain points.

I don't really know if our current way of doing things could be classified as an Anemic Domain Model. I read Fowler's description, and our business objects are a lot more than just accessors. We've built databinding support into our strongly typed collections, and we augment our collections with specialized methods (behavior) to accomplish specific tasks. Validation is also handled (sort of) by our business objects (via the Enterprise Library Validation Application Block, which we're just starting to make use of and really like). I say "sort of" because the real validation logic resides in a Validation class and ValidationAttribute that then gets associated with a property on a business object. This seems to be a pretty common way to handle validation in a WinForms environment because it works well with ErrorProviders. We've written a couple of custom validators for our classes and like the way it works with EntLib. So all of that said, I don't know how dumb our business objects really are.

BTW, in the case of the application I have in mind, I am not focused on disconnected use so much. It's just not a significant requirement in the domain I'm working within. I can imagine some cases where offline use might come up, but I plan to solve those on a case-by-case basis rather than attempting to bake disconnected use into the overall design.

Honestly, I think this is the wisest course of action. Until you can be certain of offline use, I don't see the point in investing a whole lot of time and effort in making that work.

When we started building our app we thought it would take more of a traditional smart client shape; we anticipated offline use where data would have to be synced up upon reconnection. That hasn't happened yet (although we do have one portion of the application on the faraway horizon that will require that specific feature). So, we can say with some ashamed look about us that yes, we've done some YAGNI in that regard.

In Chris' case, where there are perhaps complex relationships and the business logic has to be accessible in a disconnected manner, I could see the value of building an abstraction layer - the Manager classes - which, in his usage, is more like a hybrid Domain Model that blurs the distinction between Anemic and classic Domain Models.

Yeah, our managers are a bit goofy in relation to all the domain model terminology, etc. I was swayed in that direction when reading the LLBLGenPro boards, since that's the O/R mapper we use.

We don't use the LLBLGen objects directly - we map them to our custom business objects prior to returning them from a DAO call. The reason for that was to abstract the O/R layer a bit (we eventually want to move to NHibernate and map directly to our own objects, saving ourselves from the only object translation we have to do right now).

The manager classes were born out of a lot of refactoring. They were simply the product of trying to make a large object graph easy to work with in the client application. We would run into use cases where we would need 6 objects, and maybe object B already came loaded with objects C, D, and E. In another use case we might need object B again, but without objects C, D, and E. It seemed like a waste to be querying for a full object graph of B in those cases. Maybe that was nothing to be concerned with, but it swayed us in our design.

So, we wrote managers, which are a way for us to aggregate together the objects and collections we need to fulfill a use case; this just makes things easier to deal with on the client side. Instead of making five calls to our service to get objects A through E, I can just say "get me the manager" and get everything I need. I don't know if this is terrible design or not; I know it's the thing that causes us the least pain so far.
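The manager idea can be sketched like this (a hypothetical illustration; the real EmployeeManager described earlier in the thread would carry more child objects):

```csharp
using System.Collections.Generic;

// Plain state objects (names hypothetical).
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Department
{
    public string Name { get; set; }
}

// The manager aggregates everything a use case needs into one object,
// so the client makes a single service call instead of five.
public class EmployeeManager
{
    public Employee Employee { get; set; }
    public List<Department> Departments { get; set; }
    // EmployeeStatus, EmployeePayRate, etc. would hang off here as well.
}
```

Client-side usage then collapses to one round trip, as in the earlier snippet: `EmployeeManager manager = _payrollService.GetEmployee(employeeID);`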

I'm curious what a more "connected" architecture looks like, however, from you folks who have more experience working with that. One of our concerns is scalability, because our application will eventually have a lot of users (it's just in internal development now, but eventually it's going to replace a pretty big system currently in use). Part of the reason we went with our architecture and the service layer is because we thought it wise to let IIS handle the load.

I mentioned above our desire to use NHibernate instead, and of course in a disconnected object environment we immediately ran into problems getting it to work. I'd really like to know what a more connected architecture looks like and how it scales, how the calls are made, and just the general flow of data and object creation, lifetimes, etc.

Jul 22, 2007 at 4:21 AM
I would picture these in three separate assemblies. The State Model assembly would be shared by the presentation and middle tiers. Cross-tier transfers would presumably be in the form of serialized State Model objects. The Client Manager objects would be focused on presentation-oriented logic and the Server Manager would be focused on server-oriented business logic. The design focus of the middle tier would still be service-oriented, but potentially much more than just a wrapper over the data access (O/R mapping) classes. The way I picture it is that client-server interaction would primarily be focused on a "prepare and submit" model, with a fairly high granularity in the service interface.
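A sketch of that split (all names and rules hypothetical, with each section standing in for one of the three assemblies):

```csharp
using System;

// --- StateModel assembly (shared by both tiers) ---
[Serializable]
public class Timecard
{
    public int EmployeeId { get; set; }
    public decimal HoursWorked { get; set; }
    public decimal OvertimeEarned { get; set; }
}

// --- ServerManagers assembly (middle tier) ---
public class TimecardServerManager
{
    // Server-oriented business logic: run the rules, then hand the
    // updated state back to be serialized across the wire.
    public Timecard Submit(Timecard card)
    {
        card.OvertimeEarned = Math.Max(0m, card.HoursWorked - 40m);
        return card;
    }
}

// --- ClientManagers assembly (presentation tier) ---
public class TimecardClientManager
{
    // Presentation-oriented logic: caching, validation, and the
    // "prepare" half of the prepare-and-submit model.
    public bool IsReadyToSubmit(Timecard card)
    {
        return card.HoursWorked > 0m;
    }
}
```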

The architecture I am currently using is quite similar to this and similar to what Chris Holmes describes above. I also use LLBLGen for O/R mapping, and bind customized LLBLGen objects directly to grids, etc. The application will primarily be used in a client-server fashion, although it is built to be used in a disconnected scenario as well, where laptops or Pocket PCs are used to view and collect data in the field. The data managers on the server side are primarily concerned with compiling different sets of entities to fulfill a use case, and also deal with issues like auditing, concurrency, and in some cases server-side caching.

The middle tier is primarily just a service facade allowing the client managers to communicate with the server data managers. No DTOs are used; full serialized entity graphs are passed over the wire. The client-side managers then deal with issues like client-side caching, concurrency, and validation. Most of the presentation-related logic is built into Presenter classes, with much care taken to ensure that the presenters are easily reused for other views (such as CF).

I am not sure that this is the 'best' architecture, and I am by no means an expert in design patterns, but this is what I have come up with after 1.5 years of reading, refactoring, and experimentation. I feel that it is a pretty clean and maintainable solution for this type of application.

I mentioned above our desire to use NHibernate instead, and of course in a disconnected object environment we immediately ran into problems getting it to work.

Just curious, Chris: why are you looking to move to NHibernate? I chose to use LLBLGen over NHibernate on this project for precisely the reason you mention. There are a few things in LLBL that irk me, but on the whole I feel it is a very solid product.
Jul 23, 2007 at 3:02 AM

sefstrat wrote:

Just curious, Chris: why are you looking to move to NHibernate? I chose to use LLBLGen over NHibernate on this project for precisely the reason you mention. There are a few things in LLBL that irk me, but on the whole I feel it is a very solid product.

Don't get me wrong: I think LLBLGenPro is a great OR/M.

But since we map the LLBLGen entities to our own business objects prior to returning them from the service facade and over the wire, I feel like we're losing a lot of the value that LLBLGenPro brings to the table. For instance, if you query the DB with LLBLGenPro, you get an EmployeeEntity back. We then map the fields to our own Employee object (and we have an EmployeeCollection class as well for databinding). Frans has done a great job with his entities, they just aren't exactly what we want.

With NHibernate, I can map directly from the database to my business object. So I can skip the EmployeeEntity and go directly to Employee. But NHibernate works quite differently than LLBLGenPro. NHibernate relies completely on sessions, so you have to be mindful of the session object. LLBLGenPro works in a much more transactional manner, which allows you to work with objects in a completely different way (disconnected).

Jul 23, 2007 at 3:59 PM
Edited Jul 23, 2007 at 4:06 PM
I thought I'd jump in to offer my two cents concerning the portion of this conversation pertaining to the construct types used to implement the model within an application.

For business application development, I would recommend reading Microsoft's Application Architecture for .NET: Designing Applications and Services (available from the Microsoft patterns & practices site). The architecture and terminology described by this document are still foundational to the design principles found within many of the Patterns & Practices approaches.

One of the component types described within the document is Business Entities, which simply refers to components used to represent data. Business entities may contain behavior, but are often just data structures used to pass values between layers of an application. Sometimes Business Entities contain limited behavior reflected in CRUD-oriented methods, but I personally dislike this approach because it blurs what is otherwise a simple data construct with a traditional domain model.

My preferred construct for modeling the data within a business application is a simple business object which contains absolutely no behavior aside from what could be considered behavior inherent to the object itself (which I rarely find the need for, as I write software that typically sells things and services, not software that operates things and services).

The reason for this is that I like to maintain a strict separation of concerns in the applications I design through the use of layers, where no layer ever accesses either the layer "above" it or more than one layer "below" it. One way to enforce this is to avoid passing around objects which themselves allow other components within the application to invoke domain logic. While the classic domain model does allow for separation of concerns in the sense that domain logic is kept in the Business Layer of the application (which in this case is the domain model objects themselves), if these objects are passed between the layers then this doesn't allow for the kind of strict layer communication rules I've just described. Their use also invokes the imagery of one layer being passed to another, rather than that of data being passed between layers.

Another downside of a classic domain model used within business applications is that it often leads developers to define their object model through interfaces. I would lean more in the direction of using interfaces to define an object model for gaming software, where you might want characters and things to be interchangeable, or for software that operates machinery, where you might want real-world factories or devices to accept interchangeable widgets, but I see the use of interfaces as a disadvantage when used to model the domain data of a business application.

To the extent that a business domain model contains behavior (which again I tend to avoid), I believe it is appropriate to use interfaces, but only to define the behavior of the model, not to define the entity itself (e.g. a domain model object called Package at FedEx, UPS, DHL, or the USPS should implement an IShippable, not an IPackage). When people define their object models starting with interfaces it A) leads to duplication of the data implementation portion of the model (e.g. reimplementing String properties, lazy initialization, etc.), which is typically the majority of what business domain models are about, and B) leads to a rigid definition of arguably the most volatile portion of your application, the model, which necessitates that anyone who implements these interfaces must change their code.
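The IShippable example might look like this in code (a minimal sketch; the rate property and cost rule are invented for illustration):

```csharp
// The interface defines behavior only -- not the entity's identity.
public interface IShippable
{
    decimal Weight { get; }
    decimal ShippingCost();
}

// The entity is a concrete class named for the domain noun; there is no
// IPackage interface duplicating its data shape at every implementer.
public class Package : IShippable
{
    public decimal Weight { get; set; }
    public decimal RatePerKilogram { get; set; }

    public decimal ShippingCost()
    {
        return Weight * RatePerKilogram;
    }
}
```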

That said, it is possible to use a domain model without the overuse (or misuse, depending on your perspective) of interfaces, and to maintain layer communication protocol within your application, but this depends more on discipline than on constructs which enforce your desired behavior. The typical IT shop today focuses primarily on the products and services it sells (e.g. FedEx ships packages, Dell sells computers, Walmart sells a few items), and much less on the software that supports the business. Because the software is typically not the product, there is often pressure to cut corners, meet unrealistic deadlines, etc., because it is perceived that this won't have a direct impact on the bottom line. This means there is less time for code reviews, refactoring, etc. than we would often like, and therefore setting up constructs in the beginning which help to enforce your chosen development practices is a great help.

Another benefit of using simple data objects is that they can serve as a common object model across an enterprise while allowing flexibility in how the objects may be used. In many large corporations there exist different systems that operate on the same data, but perhaps in different ways. The core definition of their business entities generally varies less than the methods which operate on that data. By decoupling the domain logic from the domain object, all the systems can potentially use the same set of business objects.

I don't know that there is a commonly accepted term that specifically describes these sorts of simple data objects aside from Martin Fowler's pejorative one, but there should be, as I think they hold great benefit in business applications. In a sense, Business Entities which contain no behavior are just Data Transfer Objects, but that term implies that the objects cross application boundaries, which Business Entities don't necessarily do. The term "Plain Old C# Objects" (POCO) doesn't really apply either, as it is derived from Martin Fowler's POJO objects, which are really just your classic Domain Model (i.e. they contain behavior) and therefore aren't really "plain" in the sense of containing only data.

As far as the negative connotations of this approach go, while it may be true that some use this style because they aren't familiar with the classic domain model, others use it because they are. Don't let any particular architect demigod dissuade you from a pattern unless they present a list of liabilities that outweighs the benefits for your particular application.