Infosys Microsoft Alliance and Solutions blog


December 28, 2009

VisualStateManager's Benefits

Many months back I wrote about the VisualStateManager (VSM) feature in Silverlight. Over time, MS has been streamlining it, and new additions are available with Blend 3 to support VSM. VSM was introduced in SL, as some say, mostly to address the lack of triggers, which made creating control templates a big issue. Eventually WPF 4 will also support VSM. There have been many interesting debates on this, which you can find here, here, here and here, but this parts-and-states model is here to stay.

In case you haven't spent time as yet checking this out, you can find a good intro in the 4 part series by Karen Corby, starting here. Another interesting blog is by Ian Griffiths. Christian Schormann captures the goals for VSM here.

Personally I find VSM useful, and in a recent conversation we were discussing its benefits. I thought I would capture a few here.

1. VSM allows logical grouping of states. Looking at the state groups and the states within them, it is easy to figure out which states are orthogonal and which are mutually exclusive.

2. The capability of a control to exist in multiple states from different VSM state groups is interesting, and as this is managed internally, the control writer doesn't have to worry much about it, apart from invoking GoTo actions to transition to specific states.

3. The VSM model can be applied not only to custom controls but also at the page level, with page states managed through it. I had shown this in my earlier blog.

4. The best part of VSM, I feel, is the ability to deactivate a state when animating to another state in the same state group. VSM not only deactivates the state, but actually plays a suitable reverse animation. Ian explains this in his blog in more detail. I tried to dig deeper into the framework code to see how this is implemented, but hit a roadblock, as the real implementation is in agcore.dll, and that being a native DLL, I could not use Reflector with it.
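To make the grouping and deactivation benefits concrete, here is a small Python sketch of the parts-and-states idea. The real thing is of course XAML/C#, and the `StateGroups` class and its `go_to_state` method are invented for illustration; the point is simply that states within a group are mutually exclusive while states from different groups coexist:

```python
# Illustrative model of VSM state groups: one active state per group.
# Group and state names mirror the standard Silverlight Button groups.

class StateGroups:
    def __init__(self, groups):
        self.groups = groups                     # group name -> list of states
        self.active = {g: None for g in groups}  # one active state per group

    def go_to_state(self, state):
        """Activate 'state' in its group; return the state it deactivates."""
        for group, states in self.groups.items():
            if state in states:
                previous = self.active[group]    # VSM would reverse-animate this
                self.active[group] = state
                return previous
        raise ValueError(f"unknown state: {state}")

button = StateGroups({
    "CommonStates": ["Normal", "MouseOver", "Pressed", "Disabled"],
    "FocusStates": ["Focused", "Unfocused"],
})
button.go_to_state("MouseOver")
button.go_to_state("Focused")         # different group: MouseOver stays active
prev = button.go_to_state("Pressed")  # same group: MouseOver is deactivated
```

Note the last call: moving to Pressed hands back MouseOver, which is exactly the state VSM deactivates (and reverse-animates) on your behalf.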

You are welcome to add to this list.

December 21, 2009

MMIX > MMX - The King is dead; long live the King (Year end musing on IT providers space)

This is probably my last post this year, and I felt like sharing my thoughts in one go.

Yet another year is about to come to an end. Many would like to forget this year as a nightmare - job cuts, stagnation, closing businesses, struggling bottom lines, insecurity and unexpected falls. While several things may have gone wrong, there were some well-thought-through and meticulous business moves that will change the business paradigm in the near future.

This year also brought a sense of ‘sensibility’ and a need for ‘introspection’ at the enterprise level. The unrealistic targets and greed were corrected to pave the way for players who had ‘invested’ in the enterprise and looked at this downturn as a means to think and correct the anomalies, so that the end customers, vendors, partners, stakeholders and employees at large are able to come out of this with aplomb. Businesses have realized the need to collaborate, rethink business models, improve delivery models and do “more for less”, so that we all can benefit as an organization ecosystem.

The margins have dipped, but ‘continuity’ and ‘relationships’ have improved for those who kept away from a ‘transactional’ mindset. This is a long-term investment and will surely reap benefits! Talk of ‘green shoots’ and ‘recovery’ was heard over the latter part of the year, and that surely lays the ground for a ‘brighter’ future ahead.

Several participants in this space have commented on the future challenges and opportunities for service providers in the IT space. I like some and hate some. This is my personal list around these, not necessarily in any order:

1. One-stop shops are needed – A service provider who can provide infrastructure, BPO services, liaising, approvals, research and consulting all together, so that the enterprise does not have to go out in search of multiple partners. Platform solutions are another area which needs a lot of focus and pricing innovation.

2. Business transformation deals are a reality – There are businesses that have been running for years and have become a mish-mash of applications, all making the organization ‘disconnected’. Service providers need to enter into these ‘transformation’ deals and relate them back to real value for clients.

3. ‘Architecting the enterprise’ – are you game? – There are very few ‘greenfield implementations’ today. Thus, there is a need to identify opportunities to rationalize and streamline, and in the process ensure that the organizational processes are tied together so that the business can perform what it is meant to do, seamlessly.

4. Going the cloud way – With so many providers taking the ‘cloud’ route, it makes sense to look at opportunities to move some non-critical applications onto the cloud and leverage the advantages of OPEX over CAPEX.

5. Boundary-less organizations - a new focus area – Today a business process originates internally, goes out to a customer/vendor where the processing happens on a separate platform, and then comes back into the parent organization. All of these need to be ‘knitted’ together ‘loosely’ enough to allow for flexibility and ‘tightly’ enough to allow seamless integration of the processes.

6. ‘Hybrids’ on the cards – With platforms like ‘Azure’ from Microsoft, and the strategy of ERP products like Microsoft Dynamics to allow for a ‘terrestrial + cloud’ offering, service providers need to look at options where the core ERP/CRM is installed terrestrially (on-premise) while the peripheral systems live on the cloud.

7. ‘Nimble’ is in – Today the need of the business is to adapt to change and be flexible enough to model business processes in such a way that changes can be made seamlessly. Products need to be modeled and implemented so that this flexibility can be attained at an overall level, using the system of connected product and service offerings.

8. Micro-verticals need more than mere lip service – While this has been an oft-repeated phrase, the investments in these areas typically come from niche players working in one micro-vertical. There is a need to invest in these areas to reap benefits in the long run. Service companies may have to think from the ‘product’ company perspective to achieve any gain here.

9. Size does matter – With the increasing experience of bigger SIs/ISVs and cost-arbitrage margins becoming lower, it is the ‘reusables’ and ‘accelerators’ that can lead to effective cost reduction. With huge size comes the advantage of attracting the best talent and utilizing it appropriately.

10. What next after process optimization – Business Intelligence & Analytics? – Products have helped optimize the business and ensure that the right data is available at the right place, at the right time, with the right people; what is lacking is how to use this data for business benefit. Value analytics would be the game-changing paradigm, and is needed for service providers to move up the value chain.

11. Skilled minds are the real assets – Focus on ‘people’ will have to come back. The recession may have treated them as ‘overheads’, but they are the real drivers in the knowledge-driven enterprise. Program management, enterprise architecture and user-handling skills will be of utmost importance, besides the traditional technical, domain and functional skill sets.

12. Doing “more for less” will be the mantra – How to optimize cost will be the constant question. Clients will become more demanding, as they need value out of each dollar invested in their initiatives. Service providers need to show real value-add to clients to win deals.

13. Going ‘non-linear’ beyond the established models – Several now-established models around ‘solution selling’, ‘platforms’ and ‘SaaS’ need to evolve, and new models need to be generated to actually delink revenue growth from headcount, which has been more or less proportional ever since the birth of this industry.

14. ‘Developing’ markets hold the key to future growth – Traditional markets have matured, and client investments there are around maintenance, upgrades and enhancements. While this surely provides an assured stream of revenue, it is the new and emerging markets in pockets around South East Asia, the Middle East, South America, South Africa and Eastern Europe that hold a lot of potential for greenfield implementations.

15. ‘White spaces’ are available for SIs/ISVs to capture in new verticals and customer segments – Traditional verticals like BFSI and Retail are cluttered; there is a need to look at Communication, Entertainment, Education, Sports, Non-Profit Organizations, etc., where a lot of investment is needed.

16. Social media is here to stay – You may steer clear of Twitter, Facebook, YouTube, MySpace and the like, but the role of these media cannot be underestimated in understanding client behavior, marketing products, receiving feedback, etc., and all this is going to become even more omnipresent as more players jump on the bandwagon.

17. Service levels are needed, but IP will be the differentiator – Creating IP and knowledge assets to provide a different value proposition to clients will be in vogue, and very much needed to lead the race. This also includes bringing about improvements in existing solutions and delivery methods.

18. From offshore to near-shore – With low-cost locations available closer to the bigger markets, visa issues, increasing travel costs, green initiatives, cultural differences, anti-outsourcing lobbies, etc., there is a need to move some work to near-shore locations. This is a strategic move and needs to be implemented with more vigor.

19. Mergers on the way – Service providers can go this route to increase presence in a region, enter new verticals/domains, or complement skill sets missing in the parent organization, in order to take up greater and global challenges.

20. Innovations needed in pricing models – Fixed Price and Time & Material are passé. There is a need to go for risk-reward, risk-sharing, transaction-based and usage-based pricing. These are mentioned in ‘deck-ware’ and implemented in pockets, sometimes retrofitted onto implementations; but there is a real need to define these models and implement them to bring real value to clients.

I know this was a long one, but this is what I feel, and I did not want to edit it and risk losing my thoughts. Please feel free to add to this list. Season's Greetings and have a happy new year 2010. May it bring a lot of happiness and prosperity to all!

X-factor in XRM

Many CRM solutions are tied to relationship management, but not necessarily to managing a customer. This is where XRM comes into play, and X could be an employee, patient, investor, partner, or any other entity. A typical CRM solution will have the lead-to-opportunity life cycle, but in XRM it will vary depending on whom you are managing. In short, XRM means eXtended Relationship Management.

CRM products that are flexible and allow the re-use of the features below qualify as XRM products. This is where Microsoft Dynamics CRM scores over its competitors.

1. UI --> The user interface is created by default in the Dynamics framework. It can be easily modified and extended using .NET and Silverlight.

2. Event-driven --> Microsoft Dynamics provides handles for all the events that can be triggered through batch, integration or the user interface.

3. Entity-based --> New entities can be created, and relationships can be established between them.

4. Workflow --> Users can manage workflows through a user interface, or a new workflow can be published by the developer.

5. Security --> The object-based and role-based security model provides high flexibility to adapt to the security requirements of any new process and/or organization structure.
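The entity-based point is the crux of the "X" in XRM, and a small language-neutral sketch may help (Python here; the `Entity` class is invented for illustration and is not the Dynamics CRM API). Because entities and their relationships are data rather than hard-coded concepts, the same machinery manages a patient as easily as a customer:

```python
# Illustrative sketch: relationship management where the managed "X"
# is just another entity definition, not hard-coded as "Customer".

class Entity:
    def __init__(self, name, attributes):
        self.name = name
        self.attributes = attributes
        self.relationships = {}  # relationship name -> related Entity

    def relate(self, relationship, target):
        self.relationships[relationship] = target

# An XRM-style model for a hospital rather than a sales pipeline
patient = Entity("Patient", ["name", "date_of_birth"])
doctor = Entity("Doctor", ["name", "speciality"])
patient.relate("primary_physician", doctor)
```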

December 18, 2009

Dev 10 Release delayed

Yesterday MS announced a delay in the release of the upcoming Visual Studio (VS) 2010 (Dev 10 for short). In both Somasegar's and Scott's blogs, the reason mentioned is addressing memory and performance issues. While this definitely means that the end product will be better in these terms, what does this delay mean to you?

The good part is that the release candidate planned for Feb 2010 will have the "go live" license support, so any plans for production deployment can still go ahead. What is of more interest to me is how this impacts the Silverlight 4 release plans, since SL 4, as of now, needs Dev 10.

December 14, 2009

MYOC - Offload compute intensive tasks on Azure using the Offline Processing pattern

In this post in the MYOC cloud development series, I will share an offline processing design pattern where certain computation tasks are offloaded to another execution task using queues, which can help reduce the overall processing time of online transactions. This is a very useful pattern if you plan to build highly scalable and compute-intensive applications on the web today; it is also used by many popular websites. Here I will demonstrate how we've used this pattern to help reduce the poll creation time.

In MYOC (Make Your Opinion Count), whenever a new poll is created, a user can send invite notifications to people to participate in the poll. These notifications can be sent through SMS, e-mail or Twitter.

As shown in the figure below, when a user creates a poll and invites his/her friends to participate, the poll gets created and subsequently a notification may need to be sent out. These notifications are first sent as notification messages into a queue. A notification message contains instructions on the mode of alert to be sent out, say SMS, e-mail or Twitter. A worker role, the notification processor, processes these messages asynchronously in an offline fashion. It picks up the notification messages from the queue, processes each message and extracts the channels to which the notifications have to be sent. Once the notification channels are identified, the processor calls the respective notification services along with the information required by each service. The poll creation transaction completes without waiting for the invite notifications to be sent, significantly reducing the overall time of the transaction and improving the application throughput.

As of now MYOC supports three types of invite notifications to participants: e-mail, SMS and tweet. Other notification services can also be easily incorporated by setting a filter for them in the notification processor and passing the message to the respective service.

The overall process of sending the notification comprises three steps:
1. Putting notification messages in the queue after creation of the poll

a. Once the poll is created, create a notification entity with the participants' details (e-mail IDs and cell numbers), the Twitter ID and password of the poll creator to post the status with, and the type of notification, i.e. e-mail, SMS or tweet.
b. Form the message to be sent to the participants with the poll URL. For SMS and tweets, considering the message-size limitation, convert the URLs into tiny URLs using the TinyURL API.
c. XML-serialize the notification entity and encode it to form a string message.
d. Create the notification message queue using the StorageClient API and set the queue properties.
e. Put the message in the notification message queue.
f. If any of the above steps fails, capture the error/exception details and write them to Azure table storage.
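The serialize-encode-enqueue flow above can be sketched as follows. MYOC itself uses the .NET XmlSerializer and the Azure StorageClient API; in this Python sketch an in-memory queue stands in for the Azure queue, and the entity fields are invented for illustration — only the shape of the flow matters:

```python
import base64
import queue
import xml.etree.ElementTree as ET

def enqueue_notification(q, entity):
    # XML-serialize the notification entity...
    root = ET.Element("Notification")
    for field, value in entity.items():
        ET.SubElement(root, field).text = str(value)
    # ...and encode it to form a string message (Azure queue messages
    # are Base64-encoded strings by default)
    message = base64.b64encode(ET.tostring(root)).decode("ascii")
    # put the message in the notification message queue
    q.put(message)

notification_queue = queue.Queue()  # stands in for the Azure queue
enqueue_notification(notification_queue, {
    "Channel": "SMS",
    "Recipient": "+10000000000",
    "PollUrl": "http://tinyurl.example/abc",  # already shortened
})
```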

2. Processing of the notification by the Worker Role

a. The worker role keeps running in the background and checks whether the notification message queue has been created.
b. Create a dead-letter queue to store messages which could not be processed by the worker role, and wait until a queue is created in the storage account.
c. If the notification message queue exists, keep checking at the specified time interval whether there is any notification message in the queue.
d. If there is a message in the queue, the worker role picks it up for processing.
e. Deserialize the message into the notification entity and get the message details.
f. Check the notification message to decide how to process it; if it contains a Twitter authorization key, call the Twitter notification service from the web role to post the status on the specified Twitter account.
g. If the notification message contains an SMS message, call the SMS notification service from the web role to send an SMS to the specified mobile number.
h. If the notification message contains e-mail IDs, call the Live notification service from the web role to send e-mail to the specified e-mail IDs.
i. If any of the above steps fails, capture the error/exception details and write them to Azure table storage.
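The worker-role loop essentially boils down to a dispatch over the notification channel, with a dead-letter queue for messages it cannot process. In this illustrative Python sketch the channel names and handlers are invented, and an in-memory queue again stands in for the Azure queues; the real worker role polls the queue on an interval rather than draining it once:

```python
import queue

def process_queue(main_q, dead_letter_q, handlers):
    """Drain the queue, dispatching each message to its channel handler."""
    processed = []
    while not main_q.empty():
        message = main_q.get()
        handler = handlers.get(message.get("Channel"))
        if handler is None:
            dead_letter_q.put(message)   # unprocessable -> dead-letter queue
            continue
        handler(message)                 # e.g. call the SMS/Twitter/e-mail service
        processed.append(message)
    return processed

main_q, dead_q = queue.Queue(), queue.Queue()
main_q.put({"Channel": "SMS", "Recipient": "+10000000000"})
main_q.put({"Channel": "Fax", "Recipient": "n/a"})  # no handler registered
sent = []
done = process_queue(main_q, dead_q, {"SMS": sent.append})
```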

3. Sending messages through various notification services to the participants

a. Notification services are RESTful WCF services created in the web role for the various notification mechanisms in MYOC. Whenever a service is called with valid inputs, it calls the concerned method to send the invite notification to the participants.
b. Follow the link to see how a tweet is sent to a specified Twitter account.
c. The SMS service sends SMS messages to the specified mobile numbers.
d. Follow the link to see how an e-mail is sent to the specified e-mail IDs.
e. If any of the above steps fails, capture the error/exception details and write them to Azure table storage.

Offloading the notification transaction to a background process has helped improve the overall throughput of the application. Architects and developers need to keep an eye out for scenarios which exhibit such characteristics in their overall process, and design to offload such activities to an offline mode.

December 10, 2009

Win 7 - Multi Touch

In my earlier blog I had touched upon some high-level concepts of the touch support for applications that is now available with Windows 7. In this post I will spend some time on multi-touch, and a few other points around the support in .NET and on Surface.

So when we say multi-touch, what does it really mean? This is where basic touch-to-mouse promotion and real multi-touch differ. Multi-touch means the ability to detect multiple touch points at the same time on the touch hardware, and to program against each of them independently. In a mouse-driven world there is a single point of click, and hence controls respond one at a time. With multi-touch, however, we are now capable of programming against multiple controls at the same time. While most multi-touch samples/demos show the usage of multiple fingers, what you should realize is that this also supports interaction by multiple people, a behavior which MS Surface demonstrated very well.

In my earlier blog I talked about WM_TOUCH and WM_GESTURE. An important addition over these in WPF 4.0 is the manipulation events. These essentially help in performing pan, zoom and rotate types of behaviors. The manipulation events are fired if a control requests them by setting its IsManipulationEnabled property to true. Programmatically you then typically handle the delta manipulation events and manage the transforms on the specific control: a scale transform for zoom, a translate transform for panning and a rotation transform for rotate. Check here for some details and samples on these concepts.
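Conceptually, a delta-manipulation handler just accumulates the reported deltas into the control's transform values. The Python sketch below only illustrates that bookkeeping; the class and method names are invented, and in WPF you would instead update ScaleTransform, TranslateTransform and RotateTransform objects from inside a ManipulationDelta handler:

```python
class ManipulatedElement:
    """Accumulates manipulation deltas into transform values."""

    def __init__(self):
        self.scale = 1.0             # fed to a scale transform (zoom)
        self.offset = (0.0, 0.0)     # fed to a translate transform (pan)
        self.rotation_deg = 0.0      # fed to a rotation transform (rotate)

    def on_manipulation_delta(self, scale=1.0, translation=(0.0, 0.0),
                              rotation_deg=0.0):
        self.scale *= scale
        ox, oy = self.offset
        tx, ty = translation
        self.offset = (ox + tx, oy + ty)
        self.rotation_deg = (self.rotation_deg + rotation_deg) % 360

photo = ManipulatedElement()
photo.on_manipulation_delta(scale=1.5, translation=(10, 5))  # pinch-zoom + drag
photo.on_manipulation_delta(scale=2.0, rotation_deg=90)      # zoom + rotate
```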

At PDC 2009, MS also talked about how Surface controls and WPF controls are headed for common underlying code to support all these kinds of behaviors. A few months back, in one of our discussions around the future of Surface, we talked about the common issues people are facing with it, and the 3 prominent ones were:

  1. Cost of the Surface table
  2. Surface is horizontal while most apps would prefer a vertical display
  3. Restrictions on access to the Surface SDK

It is really interesting to note that MS is addressing all of these and you might want to check the announcements made during PDC. See this video.

While all this is definitely interesting and exciting, to me this also very much resembles the Gartner hype cycle (also see here). As technology innovations happen, in the initial days/months there will be a tendency to try them out for just about any application. There will, however, be challenges, and some of the points listed by Bill Buxton in his blog here highlight them. I personally like the statement he makes:

"if the finger was the ultimate device, why didn’t Picasso and Rembrandt restrict themselves to finger painting?"

This very clearly brings out the point that there will always be specialized devices, and not everything can be replaced by touch and natural gestures just because they seem natural to use. Another simple example is the speed of typing vs. the speed of writing: any application that uses text input even in a moderate sense will continue to require a keyboard. The keyboard may become on-screen/virtual, but that does impact the real estate available for other application-specific items.

Finally, I still do believe that innovations will keep happening and we will definitely see more and more touch hardware and such applications.

Dynamics Unbound - The 'Clouds' in 'Azure' Sky

Azure is the hue halfway between blue and cyan, generally used to describe clear skies. With cloud computing services being provided through the Microsoft Azure platform, am I missing a point? I guess Azure is the platform where the clouds can venture out and, in this case in a positive way, provide a lot of options to developers, ISVs, SIs, IT teams and the business community at large, leading at the end of the day to “incremental benefits” for all stakeholders: a familiar development experience, on-demand scalability and reduced time-to-market for applications. Microsoft Dynamics has also jumped on this bandwagon, and this is the most interesting part, as they are one of the early vendors to talk about a hybrid environment of ‘on-premise’ and ‘cloud’, and certain scenarios in which the two can work together.

Before going into the details, it’s good to see at a high level what this means. The Microsoft site for Azure has the diagram shown below.

Azure Architecture

These offerings come from three products:

  1. Windows Azure - providing a scalable environment with compute, storage, hosting, and management capabilities. It links to on-premises applications with secure connectivity, messaging, and identity management.
  2. SQL Azure - a Relational Database for the Cloud.
  3. AppFabric - makes it simpler to connect on-premises applications with the Cloud. AppFabric offers identity management and firewall friendly messaging to protect your assets by enabling secure connectivity and messaging between on-premises IT applications and cloud-based services.

Given that these tools are available, the possibilities are simply endless. The Microsoft Dynamics team has published the case for payment services using Microsoft Dynamics; you can check the details here. The idea is that credit card processing need not be done manually (with a credit card terminal) with the invoices then updated back in Microsoft Dynamics AX. One can simply call the payment service provided through the application (after a one-time registration) and process the credit card. Even if the user has an existing account, they can simply select a gateway provider during registration and keep the process going as before, but in a much more integrated and error-free environment. And of course this is completely compliant with PCI (Payment Card Industry) DSS (Data Security Standards). Due to the tight integration with AX, all this information is still available in AX at the customer level, helping in reviewing the customer order and payment history and the associated transactions.

Thinking beyond this, the same architecture can be used for any business process that needs an actor/activity outside the core ERP application, which is on-premise. This can be around event management, GIS data integration, maps and related services, B2B business needs, collaboration between dissimilar systems (vendor and supplier collaboration) using a common ‘intermediary’ cloud-based application, campaigns on online portals moving leads to backend systems, placement agencies specializing in outsourced hiring, customer feedback collection, organizational-level messaging (SMS, etc.) and so on.

It’s a matter of applying creative thinking to define the possibility from the SI’s perspective, and a matter of identifying the business need from the client’s perspective. Where the two meet, we will have a game-changing solution in place. This is an opportunity, and its implementation a solution, to many business needs.

From a pricing point of view, Microsoft has provided a number of options, detailed here. These include both “pay-as-you-go” and “fixed-price-fixed-capacity” options. There are also some special introductory free offers until 30-Jun-2010.

May we say - Let the ‘dynamics’ ‘clouds’ float in the ‘Azure’ sky!


December 2, 2009

AppFabric (earlier “Dublin” + “Velocity”) as the .NET Application Server

As multi-tier architecture became more and more mainstream, the application tier took on a key role in hosting and scaling business functionality. E.g., in a retail banking application, the business functionality could be a loan approval process (workflow) or a money transfer service. To scale such applications, the business logic is usually hosted on an independent tier, the application tier, using an application server.

Apart from hosting business logic, an application server performs several other functions, such as:

· Persisting a long-running loan approval workflow so that it can be dehydrated from memory when not being worked on, and rehydrated when it needs to be acted upon.

· Automatically starting multiple instances to serve increased users during peak load conditions, and reducing the running instances during off-peak hours.

· Caching data items like user profiles, etc.

· Finding transactions which may not have gone through because of invalid data inputs or criteria, e.g. a loan approval failure due to a specific failing criterion.

· Ensuring secure access to services like account login, balance check, money transfer, etc.

· Reporting on the number of requests fulfilled successfully, error conditions, etc.

If we abstract out the functionality from the above bullets, the application server performs key functions like hosting, persistence, instancing, caching, security, messaging, monitoring and management, etc., helping build highly scalable and available applications.
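Of these functions, persistence (the dehydrate/rehydrate cycle) is perhaps the easiest to picture. AppFabric persists workflow instances to SQL Server; in this illustrative Python sketch a JSON file stands in for the persistence store, and the workflow state fields are invented:

```python
import json
import os
import tempfile

def dehydrate(workflow_state, path):
    """Persist a waiting workflow instance so it can leave memory."""
    with open(path, "w") as f:
        json.dump(workflow_state, f)

def rehydrate(path):
    """Reload a persisted instance when it needs to be acted upon."""
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "loan_workflow_42.json")
dehydrate({"workflow": "LoanApproval", "step": "awaiting_credit_check"}, path)
# ...hours or days later, when the credit check result arrives...
state = rehydrate(path)
state["step"] = "approved"
```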

In the J2EE world, this business functionality would be hosted in an application server like Oracle/BEA WebLogic, through Entity or Session beans providing out-of-the-box entity life-cycle management, persistence, scaling, etc.

In the legacy Microsoft VB/COM+ world, COM+/MTS servers used to provide such functionality. With .NET, business functionality was usually written using Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) or APIs from System.EnterpriseServices. Each of these technologies has built-in APIs for hosting, persistence, instancing and security which, with support from IIS, can be used to mimic application server functions. However, supporting some of the above scenarios through integration between WCF 3.5, WF 3.5 and the IIS 7.0 stack took more than some effort, because of the lack of an appropriate application server product.

For some time, Microsoft did not have a single coherent product which qualifies as an application server in the .NET stack. With the Microsoft stack penetrating enterprise-scale apps, it was high time to have a true middle-tier application server product. At PDC 09, Microsoft announced AppFabric (earlier Dublin) for hosting WCF and WF services, providing the entire gamut of application server functions listed above.

At this point, AppFabric is in embryonic stages (beta 1) and is a web download that gets installed on top of IIS 7 as an add-in.

AppFabric can be installed only on Windows Vista, Windows 7 and Windows Server 2008, and needs IIS 7.0 or above and SQL Express or SQL Server 2008 for data persistence. Monitoring is available only for .NET 4.0 WCF and WF services. Certain functionality, like auto-starting service instances, is available only on Windows 7 or Windows Server 2008 R2.

AppFabric will also be the host for cloud services, and provides the distributed caching service framework earlier known as “Velocity”. Check my earlier blog on Velocity here.

AppFabric provides a dashboard view to manage the persisted workflow instances and WCF calls, with various knobs to control them. From the dashboard, one can drill down to running workflow instances and suspend, resume or cancel them.

AppFabric, together with IIS 7.0+ and Windows Server 2008, will help strengthen the middle tier and make it easier to build highly scalable on-premise apps.

December 1, 2009

Can ERP be a “Plug and Play” Replaceable Engine?

Recently, I came across an interesting IT strategy of using the ERP solely as the processing engine, keeping it largely independent of end-user UI needs, the presentation layer and business-specific customizations, if any. The whole idea was to insulate the end users from any impact of future changes in the IT strategy of the organization, in terms of changing the technology platform and/or the ERP vendor.

The strategy basically translates into having a common, future-proof platform for the end users, leveraging the core ERP functions such as GL/AR/AP etc. and incorporating business-logic-level extensions in a client-specific “logic layer”. The presentation layer is largely independent of the package, except where this cannot be totally avoided.

The pros and cons that I can think of for this approach are:

Pros:

  •  De-coupling the IT road map of the organization from that of the ERP vendor
  •  Minimizing the support needs for the ERP, other than the regular hot-fixes
  •  No end-user training needed (provided the presentation layer is already in place)
  •  Having full control over the business logic layer that is kept outside the ERP package
  •  Future-proofing the customization logic

Cons:

  • Easier said than done!!
  • The core ERP UI would still be needed for key users, so this is not totally plug and play.
  • Developing the business logic outside will be tedious. This might be feasible only in cases where critical and unique business scenarios are at a bare minimum.
  • It would need multiple development tools beyond the ERP-specific ones.
  • Replacing the ERP, though theoretically possible, will need significant effort to ensure the underlying functionalities work as desired.

It took me some time to assimilate this, as I have been “tuned” to think of the ERP as the core system. Making it dispensable/secondary was a bit difficult to digest at first. But after giving it deeper thought, this does make sense in specific scenarios. I myself have been part of implementations which obliquely touch upon this concept, but have not come across an implementation which directly reflects this strategy. Googling these keywords returned 106,000 results!

There seem to be a lot of thoughts/ideas floating around this. How practical it is, and how soon we can see some live examples, remains to be seen. Till then, the question in the title of this post remains largely unanswered for me. :)

It might be a good approach to think of in greenfield implementations of an ERP, having a common presentation layer for all users. Possibly, the on-boarding of users in such cases can also be staggered, with a plain-vanilla application deployment to begin with, followed by the standard UI and a business logic layer….

If you have come across something on these lines, I would look forward to hearing back from you.
