Infosys Microsoft Alliance and Solutions blog


November 30, 2009

Dallas – Information as a Service

     We are witnessing an information explosion over the internet; tons of information is getting accumulated. However, we still struggle to get “accurate and authentic data”. Have you ever needed the zip code of a city, the route to reach a place, the dining menu of a restaurant, weather forecasts and history, or crime rates in a specific area of a city? The list just goes on. How do we get this data? We search for it on our favorite search engines and rest in peace when we find it!! But do we really know whether the data we got is actually accurate?! It could be stale, misleading or just plain wrong!! Why can’t you get information as easily as you can get a size 40 creamy white Louis Philippe shirt or a striking green 8 GB iPod shuffle? Because INFORMATION is not a commodity yet!!

     In contrast, you may come across a lot of mash-up applications which consume data from genuine sources and present it to the end user in his context. This data is not only reliable but also closely up to date. The specialized organizations or vendors who publish data in some predefined format, so that it can be consumed by a variety of applications, are called Content Providers. Content providers expose their data in the form of feeds or messages abiding by some standards, so that consumers can easily make use of the data.
     So now we know who can give us accurate and authentic data. Next, we need to find content providers who specialize in providing the content to be consumed in our applications. This data is so important for applications or businesses that we are ready to pay for its usage, but the challenge is to filter through numerous content provider sites, again via the search engine, to find the appropriate ones. Why can’t we have someplace which is like a one-stop shop for all popular content providers?
Well, our prayers have been answered!! Ever since Microsoft announced Azure Services, they have been updating and trying hard to make their offerings richer. At PDC 2009, Microsoft announced a project code-named “Dallas” which addresses all the above pain points for businesses. In the world of X-as-a-Service, you can describe “Project Dallas” as “Information as a Service”, but in the true sense it is much more than that. Further details can be found here: introducing Dallas.
     Project Dallas is an information marketplace; yes, it is a marketplace. Dallas brings information together into a single location, be that information in any form: images, documents, real-time data etc. Unified provisioning and billing frameworks are very important characteristics of this information marketplace. Dallas provides the required setup for a marketplace and brings together the content providers and the consumers. Trading information like a commodity, as most of us have desired, finally seems to be here. Discovery is another key distinctive of any marketplace. Dallas provides discovery services to find content providers for a specific business domain. Content can be viewed on the portal itself to get a quick snapshot of the information.
At this point let me point out another issue in our search for information. Once we finalize on a content provider, we need to evaluate the interoperability effort and the plumbing to be done in our application to consume the data. And what if we want to change or add a new content provider? Do we have to go back and do the plumbing again? Sounds very cumbersome, right? Microsoft has been thoughtful enough to sort out this aspect as well. Dallas has a very simple provisioning process: a consumer needs to discover and select the provider, take a unified view of the data on Dallas itself, and start using and benefitting from it. Dallas supports an inbuilt billing framework to provide consumption details to the consumer as well as revenue details to the content providers. So bye-bye cumbersome and proprietary plumbing!!
     From the business user’s point of view, the datasets from the content providers are rendered within the Dallas portal along with a very powerful analytics tool, PowerPivot. As an independent analyst, you buy a subscription, use PowerPivot and take home what you need. It’s that simple. Dallas provides capabilities to bring disparate data to one place to slice and dice it, analyze it, and eventually empower you to make decisions. Analytics capabilities are extended to consume this content from within Excel using PowerPivot, Microsoft Office and SQL Server for rich reporting and analytics. Your transactional data combined with reference data brought from Dallas gives broader data points for analysis.

Dallas - High Level Architecture

Dallas High Level Architecture

*Source: Microsoft PDC 2009 Presentation on Project Dallas

     The most important part of “Project Dallas” is its integration capability in application development. Once you select the content provider, Dallas provides you a proxy class to consume the content. Download this proxy class, reference it in your code, and in three lines of C# you are ready to use the information from the content provider. That’s it!! With the same ease you can shift to any other content provider available in the Dallas marketplace, so there are no worries about lock-in and reinvesting effort to integrate with other content providers. Doesn’t it sound like getting information as a commodity and using it at our ease? Technically, since the dataset format received through the proxy class is similar across Dallas content partners, only the proxy class should change and the rest of the implementation should not be affected much. As of now the proxy class is available only in C#, but it shouldn’t be a problem to provide it for other popular programming languages and frameworks. Dallas is served cross-platform, hence its content can bring disparate data together to widen the analysis measures on varied platforms like Windows, Apple iPhones etc.
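As a rough sketch of what those three lines might look like (the service, method and field names here are purely hypothetical, since the actual proxy class is generated per content provider):

```csharp
// Hypothetical Dallas proxy usage -- illustrative names only, not a real API.
string dallasAccountKey = "<your-account-key>";
string uniqueUserId = "<your-user-id>";

var service = new CrimeDataService(dallasAccountKey, uniqueUserId); // 1. instantiate the downloaded proxy
var records = service.GetCrimeRates("Seattle", 2009);               // 2. query the dataset
foreach (var record in records)                                     // 3. consume it like any .NET collection
    Console.WriteLine("{0}: {1}", record.Area, record.Rate);
```

Switching providers would then amount to swapping in a different generated proxy, while the consuming loop stays largely unchanged.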
     Microsoft hasn’t forgotten the Content Providers. If you, as a Content Provider, have premium content, Dallas is the place to go and publish it. Dallas provides content partnering business opportunities: publish your content, make it available to consumers, and offload licensing and billing management to Dallas. Content providers have to register themselves as partners and publish their data as a web service, and they are ready for business. It appears to be a quick-setup business model for content providers.
    Come Dallas, and we can very easily develop creative mash-up applications. Imagine applications mashing content like maps, weather forecasts and history, crime rates, new businesses, news feeds, reviews, real estate content, business-relevant feeds from research firms ….. and lots more. On the other hand, you can use this content to form new and enriched analytical measures on your business data. Dallas is going to ”COMMODITIZE INFORMATION”, which can be consumed independently for better impact. Combined with your own transactional data, this is definitely going to raise analysis to a new level, so that hitherto tricky questions like “How are transactions impacted by the crime rate in a neighborhood?” or “What are the reviews of your store with respect to service under a specific manager’s shift?” will be answered with great ease!!

Windows 7 in a server 2008 environment – the best choice for branch administration

Branch site/PoP site administration has always had its own contentious points around RoI and efficiency. So far there was no inherent support from MS on this, except GPO-based throttling and BITS. With Win7 and Server 2008, MS proves its focus on helping enterprises with cost-effective and efficient solutions for branch administration to increase user productivity.

BranchCache is a new feature available in Windows 7, ideal for a typical branch office. It is integrated with BITS and is capable of caching HTTP, HTTPS and SMB traffic. From a security standpoint, it works seamlessly with SSL, IPsec and SMB signing. The solution is also flexible enough for branch offices with or without local servers. BranchCache should be looked upon as a feature which can complement the existing infrastructure. It brings in more value when used in conjunction with technologies like DFS, SCCM distribution, web servers etc. It is no surprise that many of the early adopters of Win 7 are looking at BranchCache as a prospective solution for their branch user computing.

Together with Windows Server 2008, enterprises can provide more efficient solutions to their branch office users. Active Directory Domain Services in Server 2008 supports read-only domain controllers (RODC), which can be of great use in offices with limited physical security. The addition of an RODC in a branch environment can substantially improve the user experience through faster authentication and authorization. Windows Server 2008 R2 comes with significant improvements in the TCP/IP stack, along with SMB 2.0 and other enhancements in file systems. These changes are focused on improving efficiency and reducing latencies, delays and utilization on WAN links.

------ Yogesh K G - Windows 7 / Windows Server 2008 Architect

November 27, 2009

Win 7 - Touch

A few weeks back we got access to HP's TouchSmart laptop with Win 7, and that was our first real exposure to working with touch. Touch isn't science fiction anymore, and most new mobile devices support touch capabilities, though not necessarily multi-touch. Microsoft Surface was a very interesting innovation in this space, but was limited to some extent in its usage due to factors like restricted availability of the Surface table, its cost, and it being horizontal and hence not suitable for many business applications.

Windows 7 now natively supports multi-touch, and if you have the right hardware, like the HP TouchSmart laptop, you can start to program some really cool applications. When I first read that WPF 4.0 will support touch at the framework level, I wasn't sure what it meant, and hence I decided to spend some time understanding this a bit more. Here I am not going deep into any of these aspects, but just want to highlight a few important aspects of touch and how to program for it.

The first question really is what multi-touch is, and how it is different from, say, the stylus-based Tablet PCs. Simplistically put, multi-touch literally means the ability to handle multiple touch points at the same time. So if you put more than one finger on the device, it is able to recognize all of them and provide them via relevant APIs for programmatic manipulation. Added to this is the ability to recognize gestures like pan, zoom, rotate etc., again with suitable API backing for applications to program against them.

The next question that needs some digging into is: do I really need to handle any specific messages/APIs? When I run existing applications on a Win 7 touch device, I am able to get basic manipulation working without having done anything. The point to understand here is that the underlying drivers and OS do provide touch-specific information, but since many applications (or rather almost all applications as of today) aren't touch aware, they still need to be responsive in some sense. Hence the touch messages are eventually promoted to regular mouse messages. It is due to this behavior that most applications will continue to work without any specific handling of touch. However, the response may not be very good and at times may seem jerky. You may also end up with unwanted behavior: in one of our apps we had a touch-and-move behavior, but while moving, the finger would go over another control and that control would react even though it wasn't supposed to. The mouse event promotion probably triggered the other control to react as well.

To address this, an important capability is the ability to suspend mouse event promotion. The high level behavior hence will be to capture the touch down event, suspend mouse event promotion, continue to handle touch related events to provide for pan, zoom, flip, rotate etc., and finally handle touch up to revert to normal behavior. In our application we did this and we got much better touch response from the application (apart from the hardware issues where the touch calibration would frequently go out of sync).
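In WPF 4 terms, the capture / suspend / handle / revert cycle above can be sketched as below. This is only a sketch, assuming a WPF 4 app with some UIElement named element; marking the touch events as handled is what keeps them from being promoted to mouse events:

```csharp
// Sketch (WPF 4): handle raw touch on a UIElement and stop mouse promotion.
element.TouchDown += (s, e) =>
{
    e.TouchDevice.Capture(element); // capture so later moves still reach this element
    e.Handled = true;               // handled touch events are not promoted to mouse events
};

element.TouchMove += (s, e) =>
{
    TouchPoint p = e.GetTouchPoint(element);
    // ... drive pan / zoom / flip / rotate from p.Position here ...
    e.Handled = true;
};

element.TouchUp += (s, e) =>
{
    element.ReleaseTouchCapture(e.TouchDevice); // revert to normal behavior
    e.Handled = true;
};
```

WPF 4 also offers higher-level manipulation events, but the raw touch events above map most directly to the suspend-promotion pattern described in the text.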

Applications like IE 8 and Paint are already touch aware, and if you are using them on Win 7 with touch hardware, you can play around with them. IE will allow you to zoom in and out of content via touch, and Paint will allow you to draw with multiple fingers. To know more about touch support, check these links:

  1. Windows Touch
  2. Touch support in Silverlight

Finally, one last aspect: do I need to do something different when designing touch-aware applications? One answer to this is already discussed in the previous question in terms of WM_TOUCH message handling. Some of the other aspects are to create larger controls, so that touching is easier and the application can better respond to it. Also note that certain behaviors will have to change: you can no longer program for the mouse hover and tooltip kind of user experience, as these won't work well with a touch interface. A stylus does offer a proximity sensor, and when it is close to the screen it can behave like mouse hover, but regular touch cannot. You can check out more design guidance on touch applications here. Another document of interest will be How to Design and Test Multitouch Hardware Solutions for Windows 7.

The various links that I have provided above should help you get started on your touch journey. There is also an interesting session from PDC 2009 on Windows Touch Deep Dive. Note, however, that it covers low level details, the raw Windows API and programming, and isn't really talking about managed programming.

Hope you find this interesting and would like to get started with the touch interface, if you aren't doing so already. I would like to hear from you: would you consider touch in your applications, and what kind of applications would they be?

November 26, 2009

Death by Silverlight

Yes, this title is influenced by death by chocolate, wherein you get an overdose of chocolate. At this time I feel exactly the same about Silverlight (SL). In just over two years since the first version made its mark, Silverlight has come a long, long way. Ironically, as part of a TechDays event hosted specifically at our campus, we talked a lot about SL3, and right then, across the ocean at PDC 2009, Microsoft unveiled the SL4 beta bits.

When we started looking at SL 1.0 back in late 2007, it had a limited feature set with XAML support, and most work had to be done in JavaScript. It looked more like a media (video) playback platform at that time. MS called it their RIA platform, but it didn't offer much then. With SL2 at PDC 2008 and SL3 just earlier this year in July 2009, a host of features have found their way into the platform. From a multitude of controls, to .NET language support, to IIS Smooth Streaming, to perspective 3D, to the out-of-browser experience, SL is a technology you just cannot ignore.

It was very interesting to note that since keeping the size of the plugin small was a critical part of the SL implementation, MS actually ended up reducing the size of the SL3 plugin over its predecessor, SL2. I haven't yet checked the size for SL4, and given that it is still in beta, it won't make sense to do so right now. Anyway, MS announced the public beta of SL4 at PDC 2009 and the new developer tool set is available here. Interestingly, the SL4 beta will work only with VS 2010 Beta 2 for now.

You can check this paper for an overview of the new features in SL4, as well as this session by Karen Corby at PDC 09. Here I will also give a quick snapshot of the features I found interesting. These aren't necessarily the only new features, and they may not fall high in your own priority listing.

Few SL4 Features

  1. Print support is finally here. This is one requirement that we were asked about a lot
  2. Media support now also includes ability to capture devices like webcams and stream video as recorded by it
  3. More support for writing business apps, with things like clipboard access, rich text box support, bi-directional support to handle languages like Arabic, and NGEN for platform assemblies to provide faster load times
  4. Styles can now be implicitly applied to all controls of the same type, just like in WPF, and you don't always need to use a Key
  5. Drag and drop from desktop onto SL application is supported
  6. Right click menu can now be customized

As I said, there are many more features to keep a watch on, but these are just high level ones that caught my attention. There are many more videos worth watching from PDC, in case you didn't get to attend the event in person.

Finally, when will the RTM of SL4 be available? In his keynote address on Day 2 of PDC 2009, Scott Guthrie said that the final release will ship in the first half of next year. My personal take is that it will ship along with VS 2010 in March 2010 at MIX 2010 [Disclaimer: this is purely a personal opinion and not an official statement].

November 24, 2009

Win 7 - Manage Default Printers

Some time back I had written about the new feature in Win 7 whereby you could add multiple default printers based on your network location. See here. A recent comment on it made me revisit the option, so that I could point out where to set this from. I was taken aback a bit when I could not easily locate the option.

I knew that since this was printer related, it had to be somewhere in the "Devices and Printers" dialog, but on opening it up, I could not see the option. I tried directly searching from the start menu, but had no luck. Right-clicking on a printer only allowed setting it as the default printer.

However, while trying that, I suddenly saw the option "Manage default printers" on the blue bar in the "Devices and Printers" dialog. That is when I realized that this option is visible only when you select a printer in the list of printers already available. See the picture below that shows this menu option.


Not seeing the option when I don't have any printer installed makes sense, but having to select one in the list before I could set the default printers per location is a bit surprising. Later I found more details around this option here.

November 20, 2009

MYOC - Update Twitter Status

In this blog we’ll see in detail how sending a tweet from a particular twitter account programmatically works on Azure.

1. Take the twitter ID and password as input from the user, encode them and generate a twitter authorization key. This key will be used to authorize the twitter user when calling the service to send a tweet.
string authorizationKey = Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes(
                              twitterUserId + ":" + twitterPassword));
2. Create the status message to be sent on the twitter account
string messageToSend = "Hi";

// the message has to be enclosed in between opening <status> and closing </status> tags
// if the message length is greater than 140, the message will be trimmed to 140 characters
// (including white spaces)
string status = "<status>" + messageToSend + "</status>";

3. Use the below mentioned twitter service URI for the request
// this is the URI of twitter service to update status of a twitter account
Uri serviceUri = new Uri(@"");

4. Set the Expect100Continue property to false before sending the request to the twitter service. Read this post to know why it is required.
System.Net.ServicePointManager.Expect100Continue = false;

5. Now create the request and add parameters to it. The request body is encoded using UTF-8 encoding. There are several other encoding methods, some of which provide encryption too. Please check this link for more details.
WebHeaderCollection header = new WebHeaderCollection();

// add twitter authorization key to request header
header.Add("Authorization", "Basic " + authorizationKey);

// create request and add header
HttpWebRequest request = WebRequest.Create(serviceUri) as HttpWebRequest;
request.Headers = header;

// encode the request body
byte[] requestBodyBytes = Encoding.UTF8.GetBytes(status);

//add request parameters
request.ContentLength = requestBodyBytes.Length;
request.ContentType = "application/xml";
request.Method = "POST";

// create the request stream and send the body to the twitter service
using (Stream requestStream = request.GetRequestStream())
{
    requestStream.Write(requestBodyBytes, 0, requestBodyBytes.Length);
}
6. Post the request to the twitter service URI and get the response
using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
{
    Stream stream = response.GetResponseStream();
    using (StreamReader reader = new StreamReader(stream))
    {
        string contents = reader.ReadToEnd();
    }
}

Following the above approach, one can send a tweet from his/her twitter account, which his/her followers can see in their updates.
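The pure string-building parts of steps 1 and 2 can be factored into small helpers, sketched below (the class and method names are my own, not from the original post):

```csharp
// Sketch: the non-network parts of the tweet flow as reusable helpers.
static class TweetHelper
{
    // step 1: the Basic-auth key is just base64("userId:password")
    public static string BuildAuthorizationKey(string userId, string password)
    {
        return System.Convert.ToBase64String(
            System.Text.Encoding.ASCII.GetBytes(userId + ":" + password));
    }

    // step 2: trim to 140 characters and wrap in <status> tags
    public static string BuildStatusBody(string message)
    {
        if (message.Length > 140)
            message = message.Substring(0, 140);
        return "<status>" + message + "</status>";
    }
}
```

Keeping these apart from the HttpWebRequest plumbing also makes them easy to unit test.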

MYOC - Telephony with Twilio to Vote

MYOC (Make Your Opinion Count) – an online poll application hosted on Microsoft Azure, uses Twilio to make it easier for people to participate in the online polls. Twilio is telephony in the cloud which exposes RESTful APIs to build scalable voice applications. It supports both inbound and outbound telephony calls. Pricing is developer friendly with pay-as-you-go model.

MYOC uses Twilio in two ways –
1. A poll creator can place a call to a participant to cast his/her vote
2. A participant can dial in for a particular poll to cast his/her vote

Let’s see what it takes to use Twilio in MYOC to call a participant and accept his/her vote, or to handle an incoming call to cast a vote.

Twilio Account Setup

1. The first and foremost thing is to have a twilio account. To play around, you can register for a free account which gives $30 of credit. Open the Twilio site, provide your details and click the ‘TRY TWILIO’ button.

2. This will create a user account on twilio and show the account dashboard. It will display the following things –
a. API Credentials – which has the account SID and auth token to be used while calling twilio REST APIs from the application

b. Trial Sandbox Details

c. Account Balance – shows the current available account balance

Getting Ready
1. Download the twilio documentation to learn how to use twilio in an application.

2. Download the C# helper library for twilio and add twiliorest.cs to the web role project.

3. Add the setting definitions in the csdef file as below -

<ConfigurationSettings>
  <!-- twilio account SID -->
  <Setting name="Acct_SID"/>
  <!-- twilio auth token -->
  <Setting name="Auth_Token"/>
  <!-- API version of twilio rest -->
  <Setting name="Api_Version"/>
  <!-- a US number authorized by twilio used to place outbound calls -->
  <Setting name="Caller_ID" />
</ConfigurationSettings>

4. Add the values of the above defined settings in the cscfg file as below -

<ConfigurationSettings>
  <Setting name="Acct_SID" value="**********************************"/>
  <Setting name="Auth_Token" value="********************************"/>
  <Setting name="Api_Version" value="2008-08-01"/>
  <Setting name="Caller_ID" value="XXX-XXX-XXXX"/>
</ConfigurationSettings>

Placing an outbound call on a given number

1. Read the values from the configuration file

string Acct_SID = RoleManager.GetConfigurationSetting("Acct_SID");

string Auth_Token = RoleManager.GetConfigurationSetting("Auth_Token");

string Api_Version = RoleManager.GetConfigurationSetting("Api_Version");

string Caller_Id = RoleManager.GetConfigurationSetting("Caller_ID");

2. Initialize the call process by creating a twilio account instance with the account SID and auth token, and create the service URL to be called once the participant receives the call.

// Initiate the call

TwilioRest.Account twilioacc = new TwilioRest.Account(Acct_SID, Auth_Token);


// create the URL to post at Twilio to place a call

string postUrl = "/{0}/Accounts/{1}/Calls";

postUrl = string.Format(postUrl, Api_Version, Acct_SID);


// create the service URL to be called when the participant votes over the call

HttpContext current = HttpContext.Current;

// relativePollServicePath is a placeholder variable; the actual relative path
// (with {0}/{1} placeholders for pollId and userId) was omitted in the original post
pollURL = "http://" + current.Request.Url.DnsSafeHost + relativePollServicePath;

pollURL = string.Format(pollURL, pollId, userId);

3. Place the call to the specified contact number from the caller ID.

Hashtable vars = new Hashtable(3);

vars.Add("Caller", Caller_Id);

vars.Add("Called", contactNumber);

vars.Add("Url", pollURL);

string response = twilioacc.request(postUrl, "POST", vars);

4. The service invoked via the pollURL will handle the user's response over the call. You define the dialogue used in the call, and the action to take according to the user's input, using the Twilio Markup Language (TwiML). TwiML contains tags to define various activities, like saying a text, pausing for a defined time, getting user input, playing a recorded file, recording the voice, calling another service etc. Please refer to the TwiML documentation for more details. The service to handle the call will be as shown below -

HttpContext current = HttpContext.Current;

string welcomeMsg = "Welcome to the Poll created by " + pollToDial.CreatedBy + ".";

// If the poll is not open at this time
if (!pollToDial.IsOpen)
{
    statusMessage = "Unfortunately the Poll is not available at this time. Please try again for another Poll.";
    current.Response.Write("<Response><Say voice='woman'>" + welcomeMsg + "</Say><Pause length='1'/><Say voice='woman'>" + statusMessage + "</Say></Response>");
    return;
}

// Present poll details to the user like poll name, poll question and poll choices,
// then prompt the participant to submit his/her vote for any of the given choices

// the poll will be submitted using the below URL
string action = @"http://" + current.Request.Url.DnsSafeHost + "/MyPollService/Poll/Call/Submit/" + pollName + "/" + pollSubmitter;

// get the selected choice and submit by calling the poll submit service URL
// ('respond' holds the TwiML fragment describing the poll choices)
string responseOfPoll = @"<Response><Say voice='woman'>" + welcomeMsg + "</Say><Gather method='Post' numDigits='2' action='" +
    action + "' finishOnKey='#'>" + respond + @"</Gather></Response>";
current.Response.Write(responseOfPoll);

Handling an Inbound Call

1. Map the ‘Uses URL’ of the twilio account to the application service URL and save. When a user dials the twilio number, this service will be invoked.

2. When a participant logs in to MYOC to vote for a poll, he/she will be given the dial-in number, a PIN and the poll ID. The participant calls the given number and enters the PIN, and is then redirected to the service specified in the ‘Uses URL’ of the twilio account.

3. The service will be similar to the service used in placing an outbound call using twilio. The only difference will be that participant will be prompted for entering the poll ID for which he/she wishes to vote.

4. The service to handle the call will be as shown below –

// get the poll ID entered by the user on the call
if (!string.IsNullOrEmpty(current.Request.Form["Digits"]))
{
    pollId = current.Request.Form["Digits"];
}

// if no poll ID has been entered by the participant
if (string.IsNullOrEmpty(pollId))
{
    string actionSelf = current.Request.Url.ToString().Replace(":" + current.Request.Url.Port.ToString(), "");
    string introMsg =
        "Please enter the numeric digits of the Poll you wish to participate in by pressing the keys on your touch tone phone.";

    current.Response.Write(@"<Response><Say voice='woman'>Welcome.</Say><Gather method='Post' numDigits='4' action='" +
        actionSelf + "'><Say voice='woman'>" + introMsg + @"</Say></Gather></Response>");
}

5. If the participant has entered a valid poll ID, present the poll choices to the user, get his/her response and submit the vote as explained in step 4 of the section ‘Placing an outbound call on a given number’ above.

November 18, 2009

MYOC - Make Your Opinion Cloud- Series 2

Following my previous blog, on the MYOC system requirements, here I shall be covering the solution design of MYOC.

Listed below are the basic design considerations identified for the application:

  1. Application must be accessible from any device and any location

  2. Secure access to application resources using a federated identity, allowing users to reuse the existing credentials they may have with other identity providers such as Live, AD, OpenId etc.

  3. Application should support multi-tenancy and have elastic scale to meet fluctuating workload demands

MYOC is a cloud application built on the Windows Azure platform. The above design considerations are satisfied by the Azure Services Platform and the .NET framework. Here is a pictorial representation of the MYOC solution:



MYOC Solution Architecture

At the core lies MYOC, built on the Windows Azure platform to provide poll functionality to a varied set of users, ranging from the retail consumer to the enterprise user. Polls can be made accessible over different devices such as a PC, laptop, mobile or telephone. At the same time, the poll presented is context aware with respect to the device on which it is being presented: polls are presented in a manner suited to the device used. For example, from a laptop or PC the user is presented the poll in Silverlight; from a mobile, the poll is presented in plain HTML; and if the user accesses the poll from a telephone, the poll is presented over IVR.

The design considerations stated earlier in this blog have been addressed by identifying the following key areas:

Service Oriented Architecture

The application has been built on the principles of SOA, with access to the application functionality provided by REST-based services over HTTP, with the goals of having them:

  1. Be easily consumed by any platform, be it .NET, Java, mobile apps, widgets or any other based on open standards

  2. Be easily extendable to support additional devices for the future

  3. Provide a development platform for other users to build poll applications on MYOC

Having been built on top of Windows Azure, the ubiquity of the platform allows MYOC functionality to be accessible by any standard HTTP-aware device from any location with internet access. Using the SOA design model, achieved via REST, we were able to provide users and developers with unified access to the MYOC platform across multiple delivery channels.



  MYOC Service Oriented Architecture Design

To read more on the way REST interfaces are implemented in MYOC, go here .
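As a rough illustration of how easily such a REST endpoint can be consumed from .NET (the host name and route below are hypothetical; the actual MYOC routes are covered in the linked post):

```csharp
using System.IO;
using System.Net;

class PollClient
{
    // Sketch: fetch a poll resource over plain HTTP.
    // The URL is a made-up example, not a real MYOC endpoint.
    static string FetchPoll()
    {
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://myoc.example.com/MyPollService/Poll/1234");
        request.Method = "GET";
        request.Accept = "application/xml";

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd(); // the poll, as XML
        }
    }
}
```

Any HTTP-aware client -- .NET, Java, a mobile widget -- can issue the same request, which is exactly the point of the REST design.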

Federated access

MYOC has been designed to be extensible so as to support multiple identity providers such as Live, OpenId, enterprise ADs, custom forms authentication etc. The app can handle authentication and access control across these multiple identities from a single place. This has been possible because authentication and access control in MYOC are managed separately on the cloud. This separation has been primarily achieved by using Windows Identity Foundation (previously known as Geneva) and the .NET Service Bus Access Control Service (ACS). The federation model allows users possessing separate identities to access this application in the same manner as they would experience with other apps supporting those identities.

Source: Azure August 2009 Training Kit


MYOC Access Control Design

The figure above depicts the working model of ACS, in which user access rules are not maintained in the application code or in a privately owned application repository; instead, they are maintained in a common Access Control Service repository in the form of claims. Every user (requestor) accessing MYOC (the relying party), once authenticated by their respective identity provider, is issued a claim by ACS. Within the application, these claims are used by MYOC to decide the access privileges of each user. Separating the access control rules in this way lets architects reuse claims across multiple applications and centrally manage and maintain the claims repository.
Access to MYOC functionality is further secured by the .NET Service Bus and Windows Identity Foundation. A detailed explanation of the federation process used in MYOC is available here.
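The claims-checking pattern described above can be sketched as follows. This is only an illustrative Python model, not MYOC's actual (.NET) code; the claim type "action", its values, and the issuer name are hypothetical:

```python
# Illustrative sketch of claims-based access control: the relying party
# never stores user/role mappings itself; it only inspects claims
# issued by a trusted token service (ACS in the MYOC design).

class Claim:
    def __init__(self, claim_type, value, issuer):
        self.claim_type = claim_type
        self.value = value
        self.issuer = issuer

TRUSTED_ISSUER = "accesscontrol.example.net"  # hypothetical issuer name

def can_create_poll(claims):
    """Grant access only if a matching claim comes from the trusted issuer."""
    return any(
        c.claim_type == "action" and c.value == "create-poll"
        and c.issuer == TRUSTED_ISSUER
        for c in claims
    )

claims = [Claim("action", "create-poll", TRUSTED_ISSUER)]
print(can_create_poll(claims))  # a user holding the claim is allowed
print(can_create_poll([]))      # no claim, no access
```

Because the rule lives behind the issuer rather than in the application, the same claim can authorize the user in any other relying party that trusts the issuer.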

SaaS Enabled

In addition to being service oriented, a SaaS-enabled application also needs to be multi-tenant and highly configurable. The current version of MYOC has been enabled to support multi-tenancy, leaving configurability for a future version of the app. The multi-tenant deployment models that can be supported by the application are represented below.

MYOC multi-tenancy models supported 

Multi-tenancy in MYOC is configured at both an application instance level and the data storage level.

Application Instance Multi-tenancy

Application instance level multi-tenancy is directly enabled and provided by the virtualized Azure platform. Without any user-specific implementation in code, the application can be deployed either in a shared model, using the same web role instances for every user, or in an isolated manner, by provisioning a new hosted instance for every customer who wants an isolated model.

Data storage Multi-tenancy

Multi-tenancy at the data level is what needs to be explicitly handled in the application design. The access and isolation of customer data is configured based on the domain a particular user identity belongs to. Two multi-tenancy schemas are possible in the storage layer of the application: shared data and isolated data.

Shared Data multi-tenancy model

In this model, the poll data created by users is stored in a common storage account, and the data of each user is partitioned by their unique user id, providing data isolation within a shared model. The application, based on the user's identity, passes the data access request to the common storage and then accesses the data using the user id as the partition key.
The shared data schema is better suited to retail consumers (users of Live, Yahoo, etc.) who do not require dedicated data isolation or need to meet specific data security compliances. This model makes more efficient use of the storage account than the isolated model below, but is perceived as less secure where data privacy concerns must be eliminated.
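The partition-key idea can be sketched minimally as below. This is an illustrative Python model of the shared-data scheme, not MYOC's storage code; the entity shape and method names are assumptions:

```python
# Illustrative sketch of shared-data multi-tenancy: all tenants share one
# physical store, and each entity is partitioned by the user's unique id.

class SharedPollStore:
    def __init__(self):
        self._rows = {}  # (partition_key, row_key) -> entity

    def save(self, user_id, poll_id, poll):
        # The user id acts as the partition key, isolating tenants
        # logically inside a single shared store.
        self._rows[(user_id, poll_id)] = poll

    def polls_for(self, user_id):
        # Queries are always scoped to the caller's partition.
        return {k[1]: v for k, v in self._rows.items() if k[0] == user_id}

store = SharedPollStore()
store.save("alice", "p1", {"question": "Lunch venue?"})
store.save("bob", "p2", {"question": "Release date?"})
print(store.polls_for("alice"))  # only Alice's polls are visible
```

The isolation here is purely logical: every query is scoped by partition key, but all rows live in one account, which is what makes the model storage-efficient yet perceived as less private.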

Isolated Data multi-tenancy model

In this model, the polling data is persisted in a separate storage account, not the one used in the shared data model. Enterprises that are generally protective of their data prefer this model, as it gives them complete isolation of the polls created by their users.
In this case the application, based on the user's domain, routes the data access request to a separately configured storage account. The separate account is mapped to the domain from which a user logs in; this is managed at the application level, with the logic embedded in code. The model provides a more secure approach that reduces data privacy concerns, but at the cost of lower storage utilization efficiency.
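The domain-to-account routing embedded in the application code can be sketched like this; the domains and account names are made up for illustration, and in the real application the mapping would come from configuration:

```python
# Illustrative sketch of isolated-data multi-tenancy: the domain in the
# user's identity selects a dedicated storage account; users from
# unmapped domains fall back to the shared account.

ACCOUNT_BY_DOMAIN = {
    "contoso.com": "contosostorageacct",
    "fabrikam.com": "fabrikamstorageacct",
}
SHARED_ACCOUNT = "sharedstorageacct"

def storage_account_for(user_email):
    domain = user_email.split("@", 1)[1]
    # Enterprise domains get their own account; everyone else shares.
    return ACCOUNT_BY_DOMAIN.get(domain, SHARED_ACCOUNT)

print(storage_account_for("jane@contoso.com"))  # dedicated account
print(storage_account_for("joe@hotmail.com"))   # shared account
```

Keeping the mapping in one lookup makes it easy to onboard a new enterprise tenant by provisioning an account and adding a single entry, at the price of running many partially filled accounts.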

In the next blog I shall cover the technical architecture of the application.

November 17, 2009

Defining solution in ERP: Ten Commandments - Thy shall not falter on them!

Life is never too simple! And who would know this better than consultants? Consultants have their own reasons to follow an approach for achieving a business need using any COTS solution. In my personal experience, however, any packaged product should be used with utmost care and modified only after following some standard principles. There is often a lot of pressure from some members of the client team to customize the product so that it has the look and feel of the previous application, which would reduce training needs. On the other hand, the IT teams would like to keep it simple to reduce the upgrade costs and pains, along with the recurring cost of maintenance. The aim is to reduce the overall TCO (Total Cost of Ownership) and the APM (Associated Pain of Management). Well, the second acronym is my brainchild. Smile

Of course, there are several factors that affect the decision to choose the path of customization. I have shared some thoughts around this in one of my older blog entries, The “framework” conundrum in ERP/CRM

Keeping these factors aside, I think there are some basic principles which should be discussed with the client beforehand and vetted before solution development can happen. This is important to set broad guidelines for the long match which is going to follow between consultants, business users, IT users and the CXO community at large. Once the base is defined, I have found it easier to discuss the finer details of the solution design with the stakeholders. Any aberration from the agreed-upon basic principles can be struck off by simply referring back to the ‘Commandment’ being violated!

Here is the list of ‘Commandments’ that I personally follow to start with. Depending on the client situation this can get modified, but I can assure you that it works well in most cases, with clients sometimes coming back to the list after some initial modifications, once they understand the ramifications of not following it in spirit.

So, here goes my personal list (which I call TRY TO GRASP) –


  1. TRY - The product should be used out of box as much as possible.
  2. RECOGNIZE - The entity structure and relationships as defined in the product should be modified to the minimum.
  3. YEARN - ‘Discuss’, ‘Seek Advice’ & ‘Search’ for a workaround without compromising on points 1 & 2.
  4. TOTAL VIEW - Look at the overall process flow (as needed by Client’s Business) in ERP before finalizing the elements / entities. Do not think at ‘module’ level!
  5. OBSERVE - Just using an entity (because it sounds similar to client needs) is not the right way; there can be hidden issues due to product data model.
  6. GRIP - Do not try and compromise on the ‘Financial Structure’ in the product; the solution should not interfere ‘with’ it and should work ‘around’ it.
  7. RESEARCH - Look for ‘certified’ solutions available from partners in areas which are gap in the product.
  8. ADAPT - If nothing works – ‘Customize’. Customizations are a ‘necessary evil’.
  9. SUSPECTS – Be aware of the usual suspects. Never just count the number of customizations; it’s the dimensions of ‘complexity’, ‘risk’ & ‘impact’ that count.
  10. PEEP - Do not jump on to customizing ‘white spaces’; it could be in the roadmap of the product!

I am sure there are many other things that we follow without formally writing them down in a structure. Please share your thoughts. Are there any additional “Commandments” that you follow?

Deploying non-Microsoft applications on Azure

In my previous blog post, I showed you a deployment model in which an application could leverage the capabilities of both on-premise and cloud deployment, with the application's storage migrated from on-premise to the cloud.
In this post I will show you yet another deployment model possible on Azure, one which may interest those of you who have applications running on non-Microsoft technologies. I will discuss how applications built on Java technologies can be deployed on Windows Azure and thereby reap the benefits of cloud computing.

More on "Infosys mConnect" in a Microsoft Paper is available here

With the November release of the Azure SDK, Microsoft has opened up the Windows Azure platform to applications built on non-Microsoft technologies such as J2EE. Azure can now host applications which run on open source platforms such as Apache Tomcat, MySQL, etc. In this blog post I will walk you through one such scenario, where we migrated one of our existing non-Microsoft applications to the cloud.


Infosys mConnect is a “context aware” and “device agnostic” platform that helps mobile-enable any existing application or create one from scratch. Infosys offers mConnect as a product that enables web sites, e-commerce, and banking platforms to support mobile devices without costly modifications to their services. The core differentiation of the platform lies in the way it extends traditional application functionality to any device, such as a Palm, BlackBerry or iPhone, in a way that is optimized for that end-user device and its network.


With mobile becoming a de-facto delivery channel for reaching customers and partners for most enterprises today, scalability and performance are becoming the key challenges. With the wide-scale penetration of mobile devices, B2C scenarios are seeing huge demand from the growing mobile user community for rich content delivered in real time. Infrastructure built for mobility scenarios has to be resilient enough to meet the growing load and scalability demands of the mobile community, and building such an infrastructure requires significant investment.
While Infosys mConnect, as a mobility platform, addresses issues like device diversity and optimizing application responses for different mobile devices, porting mConnect applications to Azure let us address the scalability and performance concerns as well. In the traditional model, mConnect was deployed within client locations or sometimes within our own data centers. The challenge there had been having to over-provision and at the same time manage these systems, incurring significant upfront capital investment. With mConnect on Azure, we now have an offering that can serve the huge appetite for B2C mobility services among our clients, and with low upfront investment.


Infosys mConnect is a gateway which sits between the end user and the enterprise. The gateway services requests from end users, which it can receive from any of the many mobile devices available today. A received request is forwarded to the enterprise systems in the backend to retrieve the required information. Information received from the enterprise systems is then encoded, in the relevant protocols, into screens supported by the mobile device. Based on the device information, the gateway gathers the device context and renders screens suitable for that device.

mConnect Architecture 

Infosys mConnect has been built on J2EE technologies and is hosted on Apache Tomcat. The application can also be configured to work with any relational database for data storage.
With the latest November SDK, support for “external endpoints for worker roles and access to role configurations” is available on Azure. Using these features we were able to move the mConnect technology stack onto Azure as-is, in the least invasive way, saving considerable time and effort in the process. We were able to deploy the entire application to Azure in three weeks.


mConnect deployed on Azure 

The mConnect application was deployed to Azure with SQL Azure as the database. Application start-up is handled by the Azure worker role. Before starting a Tomcat instance, the worker role first maps the Tomcat listening port to the worker role's endpoint, so that all requests arriving at the worker role endpoint are directed to Tomcat. Once the mapping is done, the worker role fires up a Tomcat instance, which in turn initializes the mConnect application.
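The bootstrap sequence above can be sketched roughly as follows. The real worker role is .NET code; this Python sketch only illustrates the two steps (rewrite Tomcat's connector port to the role's endpoint port, then launch Tomcat), and the file paths are assumptions:

```python
# Illustrative sketch (the real worker role is .NET): before launching
# Tomcat, rewrite server.xml so the HTTP connector listens on the port
# Azure assigned to the role's external endpoint, then start Tomcat.
import re
import subprocess

def bind_connector_port(server_xml, endpoint_port):
    """Point the first Tomcat HTTP connector at the role's endpoint port."""
    return re.sub(r'(<Connector[^>]*port=")\d+(")',
                  r'\g<1>%d\g<2>' % endpoint_port, server_xml, count=1)

def start_tomcat(catalina_home, endpoint_port):
    conf = catalina_home + "/conf/server.xml"
    with open(conf) as f:
        xml = f.read()
    with open(conf, "w") as f:
        f.write(bind_connector_port(xml, endpoint_port))
    # Fire up Tomcat; the mConnect webapp initializes on startup.
    return subprocess.Popen([catalina_home + "/bin/catalina.sh", "run"])

xml = '<Connector port="8080" protocol="HTTP/1.1"/>'
print(bind_connector_port(xml, 80))  # connector rebound to port 80
```

Because the endpoint port is only known at role start-up, the rewrite has to happen at runtime rather than being baked into the deployment package.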

The implementation details of running a Java app on Azure can be easily understood by downloading the Infosys developed, Tomcat solution accelerator on Azure which is available here

Benefits of Moving to Windows Azure:

1. The instant benefit we got from moving to Azure was the ability to gain instant scale without investing a single dime in infrastructure.
2. By using SQL Azure as the data store, we were instantly able to leverage relational data access capabilities on the cloud without changing a single line of code.
3. By adopting a least-invasive approach to application migration, we were able to migrate a Java application to Windows Azure in a matter of weeks, reducing the overall time to market.

Windows 7 - Appropriate roadmap for Adoption

This has been the most interesting topic of discussion with CIOs and IT directors when we talk about Windows 7 adoption. The debate is whether this should be approached as a vanilla OS upgrade or viewed as a standard-operating-environment transformation. Based on my experience working with multiple customers on Windows XP and Windows Vista upgrades, the value of the upgrade is multifold if the approach also covers the surrounding core infrastructure components. While planning for Windows 7, a health check of surrounding components such as the applications, the image deployment and distribution framework, the patch and image management lifecycle, and IT security processes can reveal gaps in the existing processes, so that corrective measures can be applied along with the Windows 7 upgrade. While this approach looks time consuming, it helps clear legacy from the system while transforming to a faster, more secure and more efficient operating system in Windows 7.


November 13, 2009

Infosys on-boarding ISV's on the cloud - 1

Infosys, on its part, is helping enterprises and ISVs adopt the cloud.

In adopting this new paradigm of cloud computing, newer and innovative styles of using cloud platforms will have to be explored. Here I shall walk you through one such case, which demonstrates how we helped one of our ISV customers, Volantis, adopt the cloud. The detailed case study on this project done by Infosys is available here

Volantis is a developer of innovative solutions for mobile carriers. The company's software makes it simple for users to point and click to create custom web sites optimized for mobile. Ubik is a free online service that allows small businesses and consumers to quickly build a mobile internet site without having to write a single line of code.

To cater to the application's non-functional requirements of high availability and global-class scalability, which would help meet the demands of a rapidly growing user base, Infosys helped Volantis offload Ubik's data storage onto the Microsoft Azure platform.

Volantis was looking to migrate some of their existing application functionality to the cloud and gain the benefits of a cloud deployment, meeting the application's demanding non-functional requirements with near-zero investment. Infosys helped Volantis identify a scenario that could demonstrate the benefits of the Windows Azure cloud: addressing the scalability demands of the application's storage requirements.

Challenges & Goals:
The main challenge faced by Volantis was storing large amounts of user-created site content and its associated metadata, amounting to over 30 terabytes, without having to invest upfront in such an infrastructure from their own working capital. Volantis wanted the storage to scale seamlessly and meet the storage demands of their mobile user community.

Proposed Solution:
Infosys proposed a phased approach for migrating to Azure. In the first phase, which I touch upon here, the migration of file storage to Azure Blob storage was proposed, as it was a layer requiring minimal change and the risk associated with the change was low. With this approach the benefits of the cloud could be realized in a short span of time.
Our solution proposed architecting a separate RESTful services layer on top of Blob storage to provide seamless and scalable access to the application. Exposing RESTful service wrappers on the Azure platform was essential so that blob storage would be accessible from Ubik's non-Microsoft application APIs.
The separate service layer was built to minimize the changes needed to use Azure storage. Site content uploaded by users comprised images and XDIME files; before being uploaded into storage, these files had to be parsed and then appropriately persisted. The newly created services layer on Azure handled this, avoiding any significant changes to the existing application codebase.
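The role of that services layer, classifying an upload (image vs. XDIME markup) and deciding where to persist it in blob storage, can be sketched as below. This is an illustrative Python sketch, not the Volantis code; the container layout and function names are assumptions:

```python
# Illustrative sketch of the services layer's routing duty: classify an
# uploaded file and derive the blob path under which to persist it.

def classify_upload(filename):
    if filename.lower().endswith((".png", ".jpg", ".gif")):
        return "images"   # binary content, stored as-is
    if filename.lower().endswith(".xdime"):
        return "xdime"    # XDIME markup, parsed before persisting
    raise ValueError("unsupported content: " + filename)

def blob_path(site_id, filename):
    # One logical container per content type, partitioned by site.
    return "%s/%s/%s" % (classify_upload(filename), site_id, filename)

print(blob_path("site42", "logo.png"))
print(blob_path("site42", "home.xdime"))
```

Keeping this routing inside the Azure-side service is what let the existing Ubik codebase stay almost untouched: the application just calls the REST wrapper and never deals with blob storage directly.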

Migration Approach:
The migration of Ubik’s data storage from on-premise to the cloud was done as shown in the figure below:


Team Dynamics:
Both the Infosys and Volantis developers worked as one team to identify areas in the code which could be impacted by the change. The team also identified the service interfaces which would need to be exposed from the Windows Azure end to make the storage directly accessible from Ubik's non-Microsoft platform. The requirements and design were done collaboratively by the Infosys and Volantis teams working across different time zones. Using Basecamp as the project management and collaboration tool helped coordinate the project across time zones and deliver it in a short span of time.
Once the initial design was complete, both teams set about working on their respective areas of the application. The Volantis team made changes in the Ubik application code, and the Infosys team focused on building the RESTful services on the Azure platform. On code completion, the teams got together to test the services and the integration touch points. The entire project was completed within four weeks.
Here I have shown a typical working model by which we engage with customers to help them adopt the cloud. With Volantis, Infosys not only assisted in identifying a cloud scenario but also helped migrate an existing application to the Azure cloud.
Continuing on my Volantis experience, in my next blog I will describe the technical architecture of the application on Azure.

SharePoint 2010 and Branding

The MOSS 2007 framework came with enough built-in capabilities that could be exploited to make a SharePoint site look and feel unlike the typical out-of-the-box SharePoint site. Unfortunately, too often, developers either lacked the training or exposure to the simple techniques that could be used to ‘brand’ their sites, or there was not enough business justification to spend time on it.

With the upcoming SharePoint 2010 and its related suite of products, the focus is going to expand from intranet and collaboration oriented internal sites to developing external-facing web artifacts. This was clearly highlighted by Steve Ballmer himself during the launch of SharePoint 2010 last month. It will require a higher level of attention being paid to the branding aspects of SharePoint.

Over the next several weeks, I plan to share my insights into how SharePoint 2010 facilitates easier branding and building of custom themes or skins. The idea is to understand how users can still tap into every built-in capability of this powerful platform without compromising on design and branding. Branding imperatives like establishing corporate identity and ownership, reinforcing enterprise standards, and creating a sense of place and connection with intangible brand values are key to making external-facing artifacts successful, and we will see how the new platform supports them.

For this post, I want to share a couple of thoughts about an ideal ‘end state’ I would like to see products like SharePoint reach when it comes to supporting branding work.

Reduce the burden on Visual Designer

Branding is a highly evolved and specialized area, and visual aspects are just one part of it. It takes a while for a good visual designer to fully internalize the interplay between the tangible, visual aspects of branding and how design can be used to communicate what the brand stands for. Even the visual aspects are multi-faceted, and the logo (which tends to be equated with branding) is just one, though important, part of branding work. The role of the visual designer is critical in any branding effort. This is true of web sites, stand-alone or developed using platform technologies like SharePoint.

With so much riding on the visual designer already, I think the complexities of understanding what SharePoint permits and where it falls short (there are several things about master pages, layouts and the dynamic nature of web part placement that I can think of in this category) place an undue burden on the visual designer. Good designers are very mindful of constraints; indeed, they distinguish themselves by their ability to internalize complex overlays of conflicting constraints. But if these constraints are relaxed, it gives designers more freedom and space to design the best solution for the problem at hand.

Allow developers to leverage commercially available Visual Design artifacts

Most SharePoint sites are built by development teams that do not have anyone with formal visual or interaction design training. This is likely to continue in the future as well, because SharePoint will still be looked at as a ‘technology’ platform. In this situation, it is important that platforms like SharePoint come with built-in templates, tools or other enablement capabilities that allow developers to deliver far better-looking interface artifacts than their native skills would otherwise permit. Microsoft has been doing a good job on this front across their Office products since Office 2007, and my early dabbling with SharePoint 2010 shows that they have delivered some positive news here too. One area that I am curious about, and am personally investigating, is how the development community can tap into the ‘commoditized’ visual design artifacts available on the web, like icon designs, visual design templates and themes, to quickly build high-impact solutions.


November 11, 2009

Windows 7 to deliver on its promises

Probably I should have written this on Sept 29 itself, when three great products, Windows 7, Windows Server 2008 R2 and Exchange 2010, were launched by Steve Ballmer in San Francisco with live coverage from six other US cities. However, even though we were a Platinum sponsor for the launch, I wanted to wait till the last event in Berlin. This was to ascertain that it was not just euphoria around a new product launch, and to spend time doing a reality check with customers, analysts and even competitors around the globe, besides of course drawing on our long association with these technologies in different capacities with Microsoft as well as our clients. I must say that from Sept 29 in San Francisco till Berlin on Nov 10, it was an absolute delight to see the products get raving reviews across industries, geographies and market segments. The launch in San Francisco was superb, with amazing energy brought in by Robert Youngjohns and Steve Ballmer during their opening sessions, panel discussions and question-and-answer sessions. Some of the leaders from early adopters of the Microsoft technologies, viz. Intel, Ford, Continental Airlines and Starwood Hotels, shared their views on how their organizations see the benefits of adopting these technologies. The paradigm of ‘New Efficiency’, the terminology used by Microsoft, is bound to drive organizations to improve productivity and innovation by getting more out of their resources and creating new revenue streams by unlocking creativity, all while the new technologies from Microsoft help pull cost out of operations, i.e. doing more with less. The demos at the event showed the robustness of the three products and how they help improve productivity in the organization.
Some of the features of Windows 7, besides the cool UI, simpler, easier and faster operation, and higher responsiveness, such as seamless access to corporate resources in a trusted and secure way without a VPN, are amazing from the perspective of a mobile workforce. Federated search, security features like BitLocker, policy compliance tools, secure browsing with IE8, and PC management leveraging application and desktop virtualization with a higher level of control and automation provide the perfect recipe for organizations to bring in new efficiencies.


Preparing the business case and demonstrating the ROI of migrating to a new OS platform to business and IT has always been a challenge; to address this, the stakeholders, be it business, IT, Microsoft, the migration vendor or the users, have to work in tandem. The proposition of Windows 7 saving $90-$160 per PC per annum (based on a Microsoft study and assessment) is very compelling. The actual benefit will vary across organizations based on multiple factors like complexity, application environment, previous investments in Vista, infrastructure optimization maturity, etc. However, one thing is sure: there are benefits that organizations must take advantage of, and they should embark on a Windows 7 migration roadmap sooner rather than later to reach the new normal faster than the competition.

November 3, 2009

Part 4: Which Presentation Tool to Use?

I personally like to use the latest technologies for presentations rather than sticking to PowerPoint decks all the time. I was first impressed by the MIX08 session by Arturo Toledo, where he used Expression Design to build his slides and then used Deep Zoom to present them. In a presentation I had to make for an internal session I used the same technique, and it was an instant hit. Many people came over later wanting to get hold of my presentation tool.

A little later, while delivering Silverlight 2 training, I realized that I would have to go back and forth between my slide deck, Visual Studio and Expression Blend, which seemed unproductive. So I ended up building the entire presentation as a Silverlight application and integrated all the demos into the same application; the whole session ran in IE. I had even added a small animation in a corner that ran for one hour, giving me and the audience a good idea of the elapsed and remaining time. Obviously, I had to ensure that I ended on time for this to look good.

Needless to say, when I recently started working with SketchFlow, it made sense to do the presentation using content built in SketchFlow itself, and that's what I did. An interesting learning on the Visual State Manager (which I had written about earlier here) was that states within a state group are mutually exclusive, while the application or control can exist in multiple states at the same time if those states aren't in the same group.

For the presentation I was making, I had a scenario where I was animating some content, and when it stopped I had to trigger the animation of some more content. This isn't possible on its own with SketchFlow animation, as an animation plays from start to end and any new animation starts from the base state. The same is true of states within a single state group. Finally I realized that I could create several state groups and add my required animations as states inside different groups. Now a new state animation does not start from the base state, but from the state the application or control was in at the end of the previous state animation.
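The state-group behavior described above can be modeled in a few lines. This is an illustrative Python sketch of the Visual State Manager semantics (the real feature lives in Silverlight XAML); the group and state names are made up:

```python
# Illustrative model of Visual State Manager semantics: states within a
# group are mutually exclusive, but a control holds one active state
# per group simultaneously.

class VisualStateManager:
    def __init__(self):
        self.current = {}  # group name -> active state

    def go_to_state(self, group, state):
        # Entering a state replaces only the active state of ITS group;
        # states in other groups are left untouched, so a new animation
        # effectively starts from wherever the other groups already are.
        self.current[group] = state

vsm = VisualStateManager()
vsm.go_to_state("IntroGroup", "IntroShown")
vsm.go_to_state("DetailGroup", "DetailShown")
print(vsm.current)  # one active state per group, both held at once
vsm.go_to_state("IntroGroup", "IntroHidden")
print(vsm.current)  # IntroGroup switched; DetailGroup unaffected
```

This is exactly why splitting the animations across separate groups solved the chaining problem: transitioning a state in one group never resets the content controlled by another group.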
