MVVM pattern is simple

“Once a developer becomes comfortable with WPF and MVVM,

it can be difficult to differentiate the two”

-Josh Smith

It was a long time ago that I became familiar with the MVVM pattern, after reading the best article ever by Josh Smith.
But in this post I would like to combine all the MVVM-related articles, video materials, and of course my own experience with the pattern. I hope it will be interesting for you and that you take away as much helpful information as I'm going to describe here.

Code download available from this link.


Before going deep into the code, let's talk a bit about the common concepts of MVVM. First of all, let's start with what the MVVM abbreviation means.

MVVM (Model-View-ViewModel) – first of all, it's an MVC (Model-View-Controller) based pattern.

Model – this is an object from the real world transformed into the application world, if you will. An example of a Model:

public class Book
{
        public string Title { get; set; }
        public string Author { get; set; }
        public int Paperback { get; set; }
        public double Price { get; set; }
}

View – this is simply the UI (User Interface) object that is responsible for displaying the Model. In our case it's a markup-defined object, in other words a XAML file.

ViewModel – this is the last but not least player in MVVM: on the one hand it acts as a separator between the View and the Model, and on the other hand as a controller between the two.

Fig. 1 MVVM diagram
Model, View, ViewModel and data binding – that's MVVM; everything else is helpers around MVVM.

Therefore, let's take a look at MVVM's helpers:
INotifyPropertyChanged – an interface (the mechanism) that notifies the View that the ViewModel has been updated, and vice versa; it acts as the notification chain between the View and the ViewModel.
Commands – one of the most used mechanisms to bind actions to the View. To implement a custom command you implement the ICommand interface, which exposes two methods: Execute and CanExecute. These two methods are really valuable, and you'll use them more often than you think. I will describe them later in this post, but if you wish to learn about them right now, follow this link. The one thing you have to keep in mind is that the commanding mechanism is a key element of the MVVM pattern. Commands have several purposes:
1) TO SEPARATE the semantics and the object that invokes a command from the logic that executes the command. This allows for multiple and disparate sources to invoke the same command logic, and it allows the command logic to be customized for different targets.
2) TO INDICATE whether an action is available.  A command can indicate whether an action is possible by implementing the CanExecute method. A button can subscribe to the CanExecuteChanged event and be disabled if CanExecute returns false or be enabled if CanExecute returns true.

3) The semantics of commands can be consistent across applications and classes, but the logic of the action is specific to the particular object acted upon.

It is also worth admitting that without the data binding infrastructure, data templates and the resource system, MVVM is nothing. Moreover, WPF was designed to make it easy to build apps using the MVVM pattern.
Well, I hope that I have described all the important mechanisms of the MVVM pattern, and now you should have that kind of fundamental understanding.
For those who are tired and want to start playing with the simple app right now, don't hesitate to go to this link and start.
Simple app Overview

You have a list of books in a grid. You can load books, add books, and remove them.
The app is a nice way to play a little with all the stuff we have discussed earlier in this post. Have a look at the UI part of the app below:
Fig. 2 Main View.
Fig. 3 Add New Book View.
Well, everything is simple and an important fact – everything is done with MVVM 🙂


Well, I'm not going to put all the project files here; instead, I'll try to explain the most important parts of the code.
And these parts are:


The base class below is responsible for implementing the INotifyPropertyChanged interface for all existing ViewModels:

public abstract class ViewModelBase : INotifyPropertyChanged
{
        #region INotifyPropertyChanged implementation

        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string propertyName)
        {
            PropertyChangedEventHandler handler = PropertyChanged;

            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }

        #endregion //INotifyPropertyChanged implementation
}

An example of using ViewModelBase in BooksViewModel:

private Book _selectedItem;

public Book SelectedItem
{
        get { return this._selectedItem; }
        set
        {
            this._selectedItem = value;
            OnPropertyChanged("SelectedItem");
        }
}
When a new value is selected, the SelectedItem property is updated and raises a notification about its change.

We also have the property below, but without calling OnPropertyChanged():

private ObservableCollection<Book> _books;

public ObservableCollection<Book> Books
{
            get { return this._books; }
            set { this._books = value; }
}
That's because ObservableCollection already implements INotifyCollectionChanged (and INotifyPropertyChanged): a notification is raised when items get added, removed, or when the whole list is refreshed. But when you are going to change (edit) an existing item in the collection, the item itself has to raise OnPropertyChanged, as described in the first variant above.
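To make that concrete, here is a sketch (my own, not from the post's download) of a Book-like model that raises its own notifications when one of its items is edited in place:

```csharp
using System.ComponentModel;

// Sketch: a notifying variant of the Book model. When an item already
// inside the ObservableCollection is edited, the item itself raises
// PropertyChanged so the View updates.
public class ObservableBook : INotifyPropertyChanged
{
    private string _title;

    public string Title
    {
        get { return _title; }
        set
        {
            if (_title == value) return; // skip no-op assignments
            _title = value;
            OnPropertyChanged("Title");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```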


Declaration in ViewModel:

private ICommand _loadBooksCommand;

public ICommand LoadBooksCommand
{
        get
        {
            if (this._loadBooksCommand == null)
            {
                this._loadBooksCommand =
                    new RelayCommand(param => this.LoadBooks(),
                                     param => this.CanBeLoaded);
            }
            return this._loadBooksCommand;
        }
}
This is how commands are declared in the ViewModel. RelayCommand is a simplified variation of DelegateCommand: it allows you to inject the command's logic (in our case LoadBooks() and CanBeLoaded) through its constructor.
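The post links to RelayCommand but never shows it, so here is a minimal sketch close to Josh Smith's original. (In WPF the CanExecuteChanged event is usually wired to CommandManager.RequerySuggested; it is left as a plain event here to keep the sketch framework-independent.)

```csharp
using System;
using System.Windows.Input;

// Minimal RelayCommand sketch: the Execute/CanExecute logic is
// injected through the constructor as delegates.
public class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        // No predicate means the command is always available.
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    // In WPF this is typically forwarded to CommandManager.RequerySuggested.
    public event EventHandler CanExecuteChanged;
}
```

A bound Button disables itself automatically whenever CanExecute returns false, which is why these two methods matter so much in practice.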

Using the bindings in XAML:

<WpfToolkit:DataGrid x:Name="BooksDataGrid"
    ItemsSource="{Binding Path=Books, Mode=TwoWay}"
    SelectedItem="{Binding Path=SelectedItem}" />



1. MVVM, a WPF UI Design Pattern.
2. WPF Apps With The Model-View-ViewModel Design Pattern. 
3. Advanced MVVM

I hope that I've touched on the most important MVVM concepts in this post, and that you can take away some advantages and understanding of it.

Please do not hesitate to ask me if you have any questions or suggestions.


Categories: WPF

WPF Styling

WPF makes me angry :)) Well, I started creating a custom style for the button control. And the first thing placed in my ResourceDictionary was the markup that uses the style:


<Button Style="{StaticResource MyStyle}"/>


and then I created the new style for my button:

<Style x:Key="MyStyle">
    <!-- ...styling stuff 🙂 -->
</Style>


It doesn't work… Any idea why? 🙂

Well, I did everything correctly in terms of implementing the style and assigning it to my control. But my button wasn't able to find my style… what?????

First place the style, and then the control which uses it 🙂 A StaticResource reference is resolved in document order at load time, so the resource must be defined before the element that references it.
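In markup, the working order looks like this sketch: the style comes first in the resources, and the consuming button after it.

```xml
<StackPanel>
    <StackPanel.Resources>
        <!-- Defined BEFORE it is referenced: StaticResource is
             resolved at load time, in document order. -->
        <Style x:Key="MyStyle" TargetType="Button">
            <!-- styling stuff 🙂 -->
        </Style>
    </StackPanel.Resources>

    <Button Style="{StaticResource MyStyle}" />
</StackPanel>
```

(If the resource really must come later in the document, a DynamicResource reference would tolerate it, at a small runtime cost.)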

Many times I have faced that situation, and many times I've had a 5-10 minute big "Haaaaaaa…?????? What is going on?" moment 🙂

Categories: WPF

Fluent Interface

I was really impressed by reading Martin Fowler's short article about Fluent Interfaces. Actually, I was reading about the Domain Specific Language work that Martin introduced to all of us; he spent a long time (about 2 years) shining a light on the topic, and he did it for sure.

You will find in the overview of the DSL book that DSLs come in two main forms: external and internal.

By Martin:

  • An External DSL is a language that’s parsed independently of the host general purpose language: good examples include regular expressions and CSS. External DSLs have a strong tradition in the Unix community.
  • Internal DSLs are a particular form of API in a host general purpose language, often referred to as a fluent interface. The way mocking libraries, such as JMock, define expectations for tests are good examples of this, as are many of the mechanisms used by Ruby on Rails. Internal DSLs also have a long tradition of usage, particularly in the Lisp community.

Honestly, I was a little bit disappointed with the name "fluent interface" in the description of Internal DSLs. Really, just think about the well-known Mr. Interface of OOP standing next to the word "Fluent" – what the hell is going on here? Yes, I agree that you have probably already faced this meaning, maybe even used this API design in your daily life, but for me it was a tasty new piece of info. An interface can be fluent, OK? Then I read that this meaning of Fluent Interface was the baby of two really well-known geeks of the IT world – Martin Fowler and Eric Evans, the latter being the author of the bestseller Domain Driven Design.

Fluent Interface – an approach to building a more readable API, actually based on method chaining, and in fact it can be used in any kind of object-oriented language.

In .NET languages you have faced it in LINQ expressions, and you are probably using mock frameworks, as Martin mentioned; these are APIs designed in a fluent-interface-oriented way.

In the context of a Fluent Interface, the API is primarily designed to be readable (like a sentence).

Generally, the Fluent Interface approach looks/is designed like the following:

Patient patient = new Patient(){PatientId = 001};


This primitive example shows us what the starting point of a Fluent Interface looks like. The Fluent Interface is a representation of an Internal DSL.
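To make the example actually read fluently, the Patient construction could be extended with chainable methods. This is my own sketch: WithName and WithAge are hypothetical methods invented for illustration, not from any real API. Each method mutates the object and returns `this`, so calls chain like a sentence.

```csharp
// Sketch of a fluent interface around the Patient example.
// WithName and WithAge are hypothetical, invented for illustration:
// each one returns 'this' so that calls can be chained.
public class Patient
{
    public int PatientId { get; set; }
    public string Name { get; private set; }
    public int Age { get; private set; }

    public Patient WithName(string name)
    {
        Name = name;
        return this; // returning 'this' is what makes the chain possible
    }

    public Patient WithAge(int age)
    {
        Age = age;
        return this;
    }
}
```

Construction then reads like a sentence: `Patient patient = new Patient { PatientId = 001 }.WithName("John Doe").WithAge(42);`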

In conclusion, I'd only like to add that I really like DSLs and related things, and I'm going to become more familiar with them. In fact, after some time, I may change something in this post as I gain more knowledge and experience. Nevertheless, I wrote about Fluent Interfaces and would be happy to hear your opinion on the topic.

I will extend the current post with more examples in my next posts regarding DSL.

Categories: Uncategorized

WPF Master Page and Dependency Properties in Action

Our Task for This Post:
To implement an application (wizard) that looks like the mockup below:

Fig1. Application’s mockup.
As you can see, we'd like to have a Title P-holder, a Message Bar P-holder, a Content P-holder and a Footer P-holder.

Everything is clear for now, so let's get to the task.

There is no Master Page (MP) concept implemented in WPF (Windows Presentation Foundation) and XAML (Extensible Application Markup Language). Well, the main point of this post is to show you how to create an MP and, last but not least, to understand what Dependency Properties (DP) are. I'm quite sure that the MP example is a good way to explain what DPs are. Taking into account what I said before, we'll kill two birds with one stone. So, here we go.

WPF Master Page Common Vision
There is nothing special or unusual in the MP concept – it is all quite simply done. The MP structure is the following:

Fig.2 WPF Master Page Common Vision

The top element of the diagram is the WPF Master Page object, and our pages (Page(1), Page(n)) are derived from it; they have the same style and content placeholders – they are twins 🙂

The MP in our case consists of three components:

1. Master Control.
2. Master Page Template.
3. Page Template Style.

Fig.3 WPF Master Page Overview

Master Control

The main role in our MP is played by Dependency Properties. Well, what are they?

From Matthew MacDonald's "Pro WPF in C# 2008: Windows Presentation Foundation with .NET 3.5": "Dependency properties are a completely new implementation of properties—one that has a significant amount of added value. You need dependency properties to plug into core WPF features, such as animation, data binding, and styles. Most of the properties that are exposed by WPF elements are dependency properties. In all the examples you've seen up to this point, you've been using dependency properties without realizing it. That's because dependency properties are designed to be consumed in the same way as normal properties."

I highly recommend this book for those who are just starting to work with this BRILLIANT technology.

So, enough words, let's write some code to bring our MP to life.
Now that you have some understanding of what a DP is, we can continue with the practical part.

From the code snippet below you can see that we have four dependency properties, namely: TitleProperty, MessageBarProperty, ContentProperty and FooterProperty.


    public class Master : Control
    {
        public static readonly DependencyProperty TitleProperty =
            DependencyProperty.Register("Title", typeof(object),
            typeof(Master), new UIPropertyMetadata());

        public static readonly DependencyProperty MessageBarProperty =
            DependencyProperty.Register("MessageBar", typeof(object),
            typeof(Master), new UIPropertyMetadata());

        public static readonly DependencyProperty ContentProperty =
            DependencyProperty.Register("Content", typeof(object),
            typeof(Master), new UIPropertyMetadata());

        public static readonly DependencyProperty FooterProperty =
            DependencyProperty.Register("Footer", typeof(object),
            typeof(Master), new UIPropertyMetadata());

        /// <summary>
        /// Initializes the <see cref="Master"/> class.
        /// </summary>
        static Master()
        {
            DefaultStyleKeyProperty.OverrideMetadata(typeof(Master),
                new FrameworkPropertyMetadata(typeof(Master)));
        }

        /// <summary>
        /// Gets or sets the title.
        /// </summary>
        /// <value>The title.</value>
        public object Title
        {
            get { return GetValue(TitleProperty); }
            set { SetValue(TitleProperty, value); }
        }

        /// <summary>
        /// Gets or sets the MessageBar.
        /// </summary>
        /// <value>The message bar.</value>
        public object MessageBar
        {
            get { return GetValue(MessageBarProperty); }
            set { SetValue(MessageBarProperty, value); }
        }

        /// <summary>
        /// Gets or sets the content.
        /// </summary>
        /// <value>The content.</value>
        public object Content
        {
            get { return GetValue(ContentProperty); }
            set { SetValue(ContentProperty, value); }
        }

        /// <summary>
        /// Gets or sets the footer.
        /// </summary>
        /// <value>The footer.</value>
        public object Footer
        {
            get { return GetValue(FooterProperty); }
            set { SetValue(FooterProperty, value); }
        }
    }

Each property represents one area in our Master Page. The type of each dependency property is object; this ensures that we can add different types of controls (TextBox, Grid, StackPanel, Button, etc.) to each area on the page.
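A consuming page can then fill each area through XAML property-element syntax, roughly like this sketch (the MasterPage1 namespace mapping is assumed, not shown in the original):

```xml
<!-- Sketch: each dependency property fills one placeholder area.
     The MasterPage1 xmlns mapping is assumed. -->
<MasterPage1:Master>
    <MasterPage1:Master.Title>
        <TextBlock Text="Personal Data" />
    </MasterPage1:Master.Title>
    <MasterPage1:Master.Content>
        <Grid><!-- page content goes here --></Grid>
    </MasterPage1:Master.Content>
    <MasterPage1:Master.Footer>
        <Button Content="Next" Width="100" />
    </MasterPage1:Master.Footer>
</MasterPage1:Master>
```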

Master Page Template

• WPF does not put layout information into the class implementing a custom control (in our case, the Master control). The look of the control is defined by a template in the file generic.xaml. This file is created automatically by Visual Studio as soon as you add a custom control to your project.

Fig.4 Add new custom control

Note. In our case we aren't creating the Master Page custom control automatically; we do it manually. The file generic.xaml, mentioned above, must be inside the Themes folder, otherwise it will be inaccessible.


<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:MasterPage1="clr-namespace:MasterPage1">

    <ResourceDictionary.MergedDictionaries>
        <ResourceDictionary Source="../Themes/Master.xaml" />
    </ResourceDictionary.MergedDictionaries>

    <Style TargetType="{x:Type MasterPage1:Master}">
        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="{x:Type MasterPage1:Master}">
                    <Border CornerRadius="5">
                        <Border.Background>
                            <SolidColorBrush Color="WhiteSmoke" />
                        </Border.Background>

                        <Grid ShowGridLines="False" Margin="5">
                            <Grid.ColumnDefinitions>
                                <ColumnDefinition />
                            </Grid.ColumnDefinitions>

                            <Grid Grid.Column="0">
                                <Grid.RowDefinitions>
                                    <RowDefinition Height="Auto" />
                                    <RowDefinition Height="*" />
                                    <RowDefinition Height="60" />
                                </Grid.RowDefinitions>

                                <!--TITLE PLACE HOLDER-->
                                <ContentPresenter Grid.Row="0"
                                    Content="{TemplateBinding Title}"
                                    Style="{StaticResource TitlePlaceHolderStyle}" />

                                <!--MESSAGE BAR-->
                                <ContentPresenter Grid.Row="1"
                                    Content="{TemplateBinding MessageBar}"
                                    Style="{StaticResource MessageBarPlaceHolderStyle}" />

                                <!--CONTENT PLACE HOLDER-->
                                <ContentPresenter Grid.Row="1"
                                    Content="{TemplateBinding Content}"
                                    Style="{StaticResource ContentPlaceHolderStyle}" />

                                <!--FOOTER PLACE HOLDER-->
                                <ContentPresenter Grid.Row="2"
                                    Content="{TemplateBinding Footer}"
                                    Style="{StaticResource FooterPlaceHolderStyle}" />
                            </Grid>
                        </Grid>
                    </Border>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
    </Style>
</ResourceDictionary>

Page Template Style

• Finally, we separate the styles of the Title, Message Bar, Content and Footer into a separate file, to make our template source code (generic.xaml) more understandable and good-looking 🙂


<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

    <Style x:Key="TitlePlaceHolderStyle"
           TargetType="{x:Type ContentPresenter}">
        <Setter Property="Control.FontSize" Value="24" />
        <Setter Property="Control.FontFamily" Value="Verdana" />
        <Setter Property="Control.FontWeight" Value="Bold" />
        <Setter Property="Control.Foreground" Value="#FFC9CBCC" />
        <Setter Property="Margin" Value="0,0,0,10" />
    </Style>

    <Style x:Key="MessageBarPlaceHolderStyle"
           TargetType="{x:Type ContentPresenter}">
        <Setter Property="Control.Height" Value="50" />
        <Setter Property="Control.VerticalAlignment" Value="Top" />
        <Setter Property="Margin" Value="0" />
    </Style>

    <Style x:Key="ContentPlaceHolderStyle"
           TargetType="{x:Type ContentPresenter}">
        <Setter Property="Control.Background" Value="Transparent" />
        <Setter Property="Control.MaxWidth" Value="760" />
        <Setter Property="Control.MaxHeight" Value="515" />
        <Setter Property="Control.VerticalAlignment" Value="Top" />
        <Setter Property="Control.HorizontalAlignment" Value="Left" />
        <Setter Property="Margin" Value="0,50,5,0" />
    </Style>

    <Style x:Key="FooterPlaceHolderStyle"
           TargetType="{x:Type ContentPresenter}">
        <Setter Property="StackPanel.HorizontalAlignment" Value="Right" />
        <Setter Property="StackPanel.Orientation" Value="Horizontal" />
        <Setter Property="Margin" Value="10" />
    </Style>
</ResourceDictionary>


So, we are done with our task.
Let’s see what we have in conclusion.

Fig.5 Master Page in action.

Let's take a look at the XAML:

<UserControl x:Class="WPFMasterPage.UserControls.FirstPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:MasterPage1="clr-namespace:MasterPage1"
    xmlns:MessageBar="clr-namespace:WPFMasterPage.UserControls">

    <MasterPage1:Master>
        <MasterPage1:Master.Title>
            Personal Data
        </MasterPage1:Master.Title>

        <MasterPage1:Master.MessageBar>
            <MessageBar:MessageBar x:Name="PersonalDataMessageBar"
                Height="20" Foreground="Red"
                FontFamily="Verdana" />
        </MasterPage1:Master.MessageBar>

        <MasterPage1:Master.Content>
            <Grid>
                <Grid.RowDefinitions>
                    <RowDefinition Height="*" />
                    <RowDefinition Height="*" />
                    <RowDefinition Height="*" />
                    <RowDefinition Height="*" />
                </Grid.RowDefinitions>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="*" />
                    <ColumnDefinition Width="*" />
                    <ColumnDefinition Width="360" />
                </Grid.ColumnDefinitions>

                <!--Full Name-->
                <TextBlock x:Name="FullNameTextBlock"
                    Text="Full Name:" FontFamily="Verdana"
                    FontSize="14" FontWeight="Normal"
                    Margin="5" Grid.Column="0" />

                <ComboBox x:Name="MrMrsTextBox"
                    FontFamily="Verdana" FontSize="12"
                    Width="55" Height="20"
                    Margin="5" VerticalAlignment="Center"
                    SelectedIndex="0" Grid.Column="1" />

                <TextBox x:Name="FullNameTextBox"
                    FontFamily="Verdana" FontSize="12"
                    Width="350" Height="20" Margin="5"
                    Grid.Column="2" />

                <!--Email-->
                <TextBlock x:Name="EmailTextBlock"
                    Text="Email:" FontFamily="Verdana" FontSize="14"
                    FontWeight="Normal" VerticalAlignment="Center"
                    Margin="5" Grid.Row="1" Grid.Column="0" />

                <TextBox x:Name="EmailTextBox"
                    FontFamily="Verdana" FontSize="12"
                    Width="350" Height="20"
                    Margin="5" VerticalAlignment="Center"
                    Grid.Column="2" Grid.Row="1" />

                <!--LinkedIn-->
                <TextBlock x:Name="LinkedInTextBlock"
                    Text="LinkedIn:" FontFamily="Verdana"
                    FontSize="14" FontWeight="Normal"
                    VerticalAlignment="Center" Margin="5"
                    Grid.Row="2" Grid.Column="0" />

                <TextBox x:Name="LinkedInTextBox"
                    FontFamily="Verdana" FontSize="12"
                    Width="350" Height="20"
                    Margin="5" VerticalAlignment="Center"
                    Grid.Column="2" Grid.Row="2" />

                <!--Blog-->
                <TextBlock x:Name="BlogTextBlock"
                    Text="Blog:" FontFamily="Verdana"
                    FontSize="14" FontWeight="Normal"
                    VerticalAlignment="Center" Margin="5"
                    Grid.Row="3" Grid.Column="0" />

                <TextBox x:Name="BlogTextBox" FontFamily="Verdana"
                    FontSize="12" Width="350" Height="20"
                    Margin="5" VerticalAlignment="Center"
                    Grid.Column="2" Grid.Row="3" />
            </Grid>
        </MasterPage1:Master.Content>

        <MasterPage1:Master.Footer>
            <StackPanel Orientation="Horizontal">
                <Button x:Name="btnNext"
                    Content="Next" Width="100"
                    Height="35" FontWeight="Normal"
                    TabIndex="2" Click="btnNext_Click" />
            </StackPanel>
        </MasterPage1:Master.Footer>
    </MasterPage1:Master>
</UserControl>

Well, folks, that was my version of a Master Page – hope you like it 🙂

I'll upload the source code later on, hopefully by the end of this weekend.

Categories: WPF

Trying to understand how to use wordpress :)

I've spent an hour finding out how to edit the blogroll 🙂

Categories: Uncategorized

Human Computation and Web 2.0

Julian Ustiyanovych (Human Computation)  & J. P. D. (Web 2.0)
Hochschule Bremen – University of Applied Sciences.

1 Introduction

The impact of the Information Revolution on our society has been felt in many aspects, and its strength is probably comparable only with that of the Industrial Revolution. Nowadays, even very young children are already capable of operating computers and accessing the information readily available over the Internet, in many cases excelling their parents and teachers. However, the use of a computer by itself does not guarantee the effectiveness of the learning process or the quality of information. Therefore, if we talk about the possibility of developing methodologies that are going to be useful even in developing countries, we must concentrate on technologies that are available on all types of platforms, based on the most democratic field: the Internet. This is the case of the so-called "Web 2.0" tools.

“The term ‘Web 2.0’ was officially coined in 2004 by Dale Dougherty, a vicepresident of O’Reilly Media Inc. (the company famous for its technology-related conferences and high quality books) during a team discussion on a potential future conference about the Web (O’Reilly, 2005a). The team wanted to capture the feeling that despite the dot-com boom and subsequent bust, the Web was ‘more important than ever, with exciting new applications and sites popping up with surprising regularity’ (O’Reilly, 2005, p. 1).”

Web 2.0 technologies are low-cost, easily accessible on many simple platforms, and carry the potential of collaborative content production and peer-review processes to improve the quality of learning and its collaborative aspects. Furthermore, given the evident familiarity that all computer users have with browsers, it seems that an Internet-based technology is the most promising.

Web 2.0 technologies provide interactive collaborative facilities, such as wiki pages (where the user can edit the content), weblogs (or blogs, multi-owner pages where the user can interact through comments or posts), syndication (RSS, Atom), social networking systems (such as MySpace, Facebook, Orkut), social bookmarking (such as Digg), media sharing (such as YouTube, Flickr), etc. Despite some clear advantages brought by the widespread production and integration of information, the Web 2.0 phenomenon can be considered a "new technology" whose contributions to education have not been well explored yet.

Teachers around the world are nowadays experiencing a challenge. When a research assignment is presented to students, what inevitably happens is that they will search for content over the Internet. Although this process might be healthy and students can acquire the content while researching it, the ease with which content can be simply copy & pasted potentially increases the chances of poor learning results. The question is no longer how to prevent numb copy and pasting from happening, but how to leverage the possibilities of this new environment to improve students' cognitive processes, and how it could be used for a greater good.

In this context, we can see that we are presently experiencing a lack of methodologies that dictate the appropriate use of this interactive environment for specific teaching or collaborative goals, despite some unstructured attempts to establish them. A good example can be seen in an article [2] by Jessica Mints entitled "Wikipedia becomes class assignment". She reports an experiment where a professor gave students an assignment to feed Wikipedia [3] (probably the most famous website based on Web 2.0) with new content, in place of an ordinary research assignment where the students would copy from it. There are some universities that have adopted Web 2.0 tools, but it is not clear how they should be used to effectively enhance the learning process.

But there is more we can do toward a mass-collaborative environment, and that is the case of "Human Computation".

2 Human Computation
2.1 Introduction to HC

Going further into the Web 2.0 field, we find the so-called "Human Computation" concept. In traditional computing, the human uses the computer to solve a problem: he (or she) provides a formalized description of the problem to the machine and receives the solution to be interpreted. In human computation, the roles are often reversed: the computer asks a person or a group of people to solve the problem, then collects, integrates and interprets the outcome into the solution. A good definition of "Human Computation" can be found in Clive Thompson's article for Wired Magazine [4], which also mentions Prof. Dr. Luis von Ahn (professor of Computer Science at Carnegie Mellon University, an expert on Human Computation):

“The art of using massive groups of networked human minds to solve problems that computers cannot. Ask a machine to point to a picture of a bird or pick out a particular voice in a crowd, and it usually fails. But even the most dim-witted human can do this easily. Von Ahn [5] has realized that our normal view of the human-computer relationship can be inverted. Most of us assume computers make people smarter. He sees people as a way to make computers smarter.”

The most popular example of Human Computation is Wikipedia, an online encyclopedia where anyone can edit, add, and correct pages, and so on. In the first part of this paper we gave an example of how the use of Wikipedia could enhance knowledge, but of course Human Computation is not only Wikipedia; there are other tools and even games that can be included in this field: CAPTCHA, the ESP Game, Peekaboom, Verbosity, etc.


2.2 CAPTCHA

As you saw above, one solution to daunting tasks – such as extending the databases that artificial-intelligence algorithms (e.g. vision algorithms) learn from – has been found in the CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) program.

Examine approaches

Actually, a CAPTCHA is a program that can generate and grade tests that (A) most humans can pass, but (B) current computer programs can't pass [6]. There is a paradox here: a CAPTCHA is a program that can generate a test but cannot pass that test itself, yet this is the main idea of CAPTCHA. In this way a CAPTCHA is like a professor who prepares a test for students but cannot pass it in their place.

CAPTCHAs are used by services such as Yahoo!, Gmail, and Hotmail to differentiate humans from computers, and they have many applications for practical security, including (but not limited to):

– Free Email Services. First, a question: how many of you have filled out a registration form for something like Yahoo, Hotmail or Gmail? I am sure that 99.9 per cent of humans in the world have been confronted with these registration forms, face to face, a few times. Several companies offer free email services such as the ones mentioned above, most of which suffer from a specific type of attack: "bots" that sign up for thousands of email accounts every minute. In a few hours, for example, Google's servers could "die" and return from the dead every other hour; with 91.6 million users [10], all of those users would be affected and paralyzed. This situation can be improved by requiring users to prove they are human before they can get a free email account. Google, for instance, uses a CAPTCHA to prevent bots from registering for accounts. Their CAPTCHA asks users to read a distorted word such as the one shown below (in fact, current computer programs are not as good as humans at reading distorted text).

– Preventing Dictionary Attacks. Pinkas and Sander [7] have suggested using CAPTCHAs to prevent dictionary attacks on password systems. The idea is simple: prevent a computer from being able to iterate through the entire space of passwords by requiring a human to type the passwords.

Example of a CAPTCHA in Action

The images below show an example of how a CAPTCHA works. The program picks a random string of letters, e.g. "pump", and then renders it into a distorted image:

Fig.1 Distorted Image.

When we are done with the steps above, the program generates a test around our word, "pump", and asks the user to type the characters that appear in the image.

This paper is not about CAPTCHA applications, but about Human Computation, so let me show other examples that demonstrate its main ideology.

2.3 ESP GAME (Labeling Images with words)

Images on the Web present a major technological challenge. There are millions of them; there are no guidelines about providing appropriate textual descriptions for them, and computer vision hasn't yet produced a program that can determine their contents in a widely useful way. [8] However, accurate descriptions of images are required by several applications, like image search engines (Google, Yahoo!, etc.) and accessibility programs for the visually impaired.

We could go to Google and type the word "dog"; the results will show us many pictures of dogs. That works by using file names and HTML text. The problem with that method is that it doesn't work very well: we could take a personal picture, give it the name "dog", and it would appear in the list when someone typed the word "dog" into Google.

The only method currently available for obtaining precise image descriptions is manual labeling, which is tedious and thus extremely costly. But what if people labeled images without realizing they were doing so? What if the experience was enjoyable? How can we do that? [3]

The answer to these questions is the following: we can use humans, but we should use them CLEVERLY. Normally, if we asked people to recognize images, we would have to pay them a lot of money for this work. The ESP Game approach is much better: it is for people who really, really like to play. The ESP Game has really nice properties:

– As people play the game, labels are generated for the images.

– As people play the game, they actually label the images very, very fast.

If the ESP Game were deployed on a popular gaming site and/or added to messengers such as ICQ, MSN, AOL, or Yahoo!, and if people played it as much as other online games, the developers of the ESP Game estimate that most images on the Web could be properly labeled in a matter of weeks.


We call our system "the ESP game" for reasons that will become apparent as the description progresses. The game is played by two partners and is meant to be played online by a large number of pairs at once. Partners are randomly assigned from among all the people playing the game. Players are not told who their partners are, nor are they allowed to communicate with their partners. The only thing partners have in common is an image they can both see. [3]

From the player's perspective, the goal of the ESP game is to guess what their partner is typing for each image. Once both players have typed the same string, they move on to the next image (the players don't have to type the string at the same time, but each must type the same string at some point while the image is on the screen). We call the process of typing the same string "agreeing on an image" (see Figure 4).

Figure 2. Partners agreeing on an image. Neither of them can see the other’s guesses.

Partners strive to agree on as many images as they can in 2.5 minutes. Every time two partners agree on an image, they get a certain number of points. If they agree on 15 images, they get a large number of bonus points. The thermometer at the bottom of the screen (see Figure 2) indicates the number of images that the partners have agreed on. By providing players with points for each image and bonus points for completing a set of images, we reinforce their incremental success in the game and thus encourage them to continue playing.


Players can also choose to pass or opt out on difficult images. If a player clicks the pass button, a message is generated on their partner’s screen; a pair cannot pass on an image until both have hit the pass button. [3]
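The agreement and scoring mechanics described above can be sketched roughly as follows. All point values and helper names are invented for illustration; the real game's internals are not specified in this text:

```python
# Minimal sketch of ESP-style round mechanics. Point values are assumed.
POINTS_PER_IMAGE = 100   # points for each agreed image (assumed value)
BONUS_AT = 15            # agreeing on this many images earns a bonus
BONUS_POINTS = 1000      # size of the bonus (assumed value)

def play_image(guesses_a, guesses_b):
    """Return the agreed label, or None if the players never match.

    Players need not type the matching string at the same moment: we keep
    every guess each player has typed so far and stop at the first overlap.
    """
    seen_a, seen_b = set(), set()
    for ga, gb in zip(guesses_a, guesses_b):
        seen_a.add(ga)
        seen_b.add(gb)
        common = seen_a & seen_b
        if common:
            return common.pop()
    return None  # no agreement; in the real game both may press "pass"

def score_round(agreed_count):
    """Points for a 2.5-minute round, with a bonus for a full set."""
    bonus = BONUS_POINTS if agreed_count >= BONUS_AT else 0
    return agreed_count * POINTS_PER_IMAGE + bonus

# Player A types "puppy" then "dog"; player B types "dog" then "hound".
print(play_image(["puppy", "dog"], ["dog", "hound"]))  # prints dog
print(score_round(15))                                 # prints 2500
```

Note how the agreed string ("dog") is exactly the label the system harvests: neither player set out to describe the image, yet describing it is the easiest way to win.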

Since the players can’t communicate and don’t know anything about each other, the easiest way for both players to type the same string is by typing something related to the common image. Notice, however, that the game doesn’t ask the players to describe the image: all they are told is that they have to “think like each other” and type the same string (thus the name “ESP”). It turns out that the string on which the two players agree is typically a good label for the image, as we will discuss in our evaluation section. [3]


Here is another game that shows how we can recognize not only a picture as a whole, but also the objects located within it. Think about a typical day, from the moment we get up, whatever we do and wherever we go: all the time, we observe. We can recognize everything we see in a moment, with little effort. Computers, on the other hand, still have trouble with such basic visual tasks as reading distorted text or finding where in an image a simple object is located.

Most of the best approaches to computer vision rely on machine learning: train an algorithm to perform a visual task by showing it example images in which the task has already been performed. For example, training an algorithm to test whether an image contains a dog would involve presenting it with multiple images of dogs, each annotated with the precise location of the dog in the image. After processing enough images, the algorithm learns to find dogs in arbitrary images. A major problem with this approach, however, is the lack of training data, which, obviously, must be prepared by hand [9], that is, by Human Computation. Here the researcher Prof. Dr. Luis von Ahn found a way to solve that problem: using people and, of course, using them CLEVERLY.
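The supervised-learning idea above can be illustrated with a toy sketch: a nearest-centroid classifier trained on hand-labeled examples. The 2-D feature vectors and labels here are invented stand-ins for real image features; the expensive part, as the paragraph says, is producing the labeled pairs by hand:

```python
# Toy supervised learning: every training example must be labeled by hand,
# which is exactly the costly step Human Computation tries to cheapen.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(labeled_examples):
    """labeled_examples: list of ((x, y), label) pairs prepared by hand."""
    by_label = {}
    for features, label in labeled_examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, features):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], features))

# Hand-labeled training data (the expensive part):
data = [((1.0, 1.2), "dog"), ((0.8, 1.0), "dog"),
        ((5.0, 4.8), "not_dog"), ((5.2, 5.1), "not_dog")]
model = train(data)
print(classify(model, (1.1, 0.9)))  # prints dog
```

Real vision systems are vastly more complex, but the dependency is the same: with only four labeled examples the model is crude, and each additional example costs human effort.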

Peekaboom improves on the data collected by the ESP Game: for each object in an image, it outputs precise location information, as well as other information useful for training computer vision algorithms. By playing the game, people help to collect this data not because they want to be helpful, but because they have fun while playing; note that they are relaxing, not working, and yet the data they produce is really helpful for training vision algorithms. [4]

Figure 3. Peek and Boom. Boom gets an image along with a word related to it, and must reveal parts of the image for Peek to guess the correct word. Peek can enter multiple guesses that Boom can see.
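How reveals turn into location data can be sketched very simply: the region Boom chooses to uncover clusters around the object, so the bounding box of the revealed points approximates the object's position. The coordinates and the `bounding_box` helper below are invented for illustration:

```python
# Sketch of how Peekaboom-style reveals yield object locations: Boom
# clicks to reveal small regions around the object named by the word,
# and the bounding box of those clicks approximates where the object is.
def bounding_box(revealed_points):
    xs = [x for x, _ in revealed_points]
    ys = [y for _, y in revealed_points]
    return (min(xs), min(ys), max(xs), max(ys))

# Boom's clicks (pixel coordinates) while helping Peek guess "dog":
clicks = [(120, 80), (135, 95), (128, 110)]
print(bounding_box(clicks))  # prints (120, 80, 135, 110)
```

Aggregating such boxes across many player pairs is what makes the data precise enough to train vision algorithms, rather than relying on one player's clicks.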



There are many advantages to using human computation and Web 2.0 for solving problems such as the intensive, high-level training of vision algorithms in a short period. In fact, day by day we are increasingly standing face to face with artificial intelligence.

Can computers think? Well, the theoretical physicist Prof. Dr. Michio Kaku [11] would answer, “Not now. But in the future…”

Presumably, if people use Human Computation as an approach to teaching artificial intelligence, that future will come as promptly as Prof. Dr. Michio Kaku thinks.

“When is your birthday? I never had a birthday…”, says a sad David in the film A.I.

A.I. takes place at an unspecified date in the future and tells the story of David, a mecha programmed with the ability to love. [13] He thinks like a real, living child, has emotions, can do many things, and can imitate love for his mother like a real child. Certainly that is fantasy for now: people cannot yet create a mecha boy like David who behaves like a real child.

We could train artificial intelligence, and Human Computation could in some way serve as the mechanism for that teaching. Taking this concept further, we can imagine a time, perhaps soon, when a kind of “David” will knock at your front door and say “Hi…”.


As you can see, training an algorithm such as the vision one in our example requires a lot of human brainpower. Taking into account that our lives are not a game, but something more complicated in which we must work, study, and spend time with our families, friends, and so on, we cannot play the ESP Game or Peekaboom 24/7 like some people on the ESP Game's top-scores list.


We believe artificial intelligence still has much to analyze and realize. CAPTCHA, the ESP Game, and Peekaboom are a big step by humans toward creating a computer that has the ability to recognize and think like a human being.

Looking at it this way, Luis von Ahn has become the main, “revolutionary” reference on the subject of Human Computation, because he did not just create a tool for labeling images (in this example, the ESP Game); he made the work enjoyable. Instead of hiring people to produce training data in a boring environment, his games attract many volunteers who do their duty as players, not as programmers or technical staff.


This kind of ability and social engagement are important keys to creating good and effective social-collaborative tools, toward an expressive improvement of computational algorithms and, consequently, the development of Artificial Intelligence.


1. What is Web 2.0? Ideas, technologies and implications for education.

3. Wikipedia.

5. Luis von Ahn’s website.

6. Luis von Ahn, Manuel Blum, Nicholas J. Hopper and John Langford. “CAPTCHA: Using Hard AI Problems for Security”.

7. Benny Pinkas and Tomas Sander. “Securing Passwords Against Dictionary Attacks”. In Proceedings of the ACM Conference on Computer and Communications Security (CCS ’02), pages 161-170. ACM Press, November 2002.

8. Luis von Ahn and Laura Dabbish. “Labeling Images with a Computer Game”.

9. Luis von Ahn, Ruoran Liu and Manuel Blum. “Peekaboom: A Game for Locating Objects in Images”.

10. Mark Evans’ blog.

11. Prof. Dr. Michio Kaku.

12. Tech TV Vault.

13. A.I. Artificial Intelligence (film).