Infosys Microsoft Alliance and Solutions blog


January 29, 2009

Calling Blueprint Command from .mht file

In the last blog, we saw how to create a WF process workflow for your blueprint to provide workflow-based guidance. We also saw the Workflow window, where activities are listed on the left-hand side, colored to show whether they are ready to execute or blocked, while the right-hand side explains the details of each activity, as shown below.

 

[Image: Initial Workflow window - activities listed on the left with their status colors, activity details on the right]

When I was working with this workflow, I realized that it is just a visual representation of the activities we have created so far. To run an activity, I still have to go back to the project, right-click, and click the menu item related to that activity, as shown below.

[Image: Context menu used to run each activity from the project]

And here we have no control over execution: the menu does not check whether the activity related to it is ready to execute or still blocked. Doesn't this defeat the purpose of showing activities in colors that indicate their status?


Of course, there has to be some way to restrict execution of a blocked activity until all its parent activities are marked as complete. After exploring, I found that such a restriction is currently not possible through the menu, but there is another way to achieve it.


What if we had a link in the .mht file that is shown for each activity in the Workflow window? This link should run the command associated with the activity, and it should work only if the activity is ready to execute (green in color).


So I opened the 'AddDBConnection.mht' file and added a hyperlink there, say 'Execute'. The address set for this hyperlink was as follows:


blueprints://60db6b1b-5562-4bd7-9773-f8dc8fa3fc32/


'60db6b1b-5562-4bd7-9773-f8dc8fa3fc32' is the GUID associated with the blueprint command that you need to execute for the given activity. This GUID can easily be obtained from the 'Commands.xml' file in the 'Properties' folder of the blueprint project. Note that the GUID is generated only when you build your blueprint project.
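
Since an .mht file is just MIME-packaged HTML, the hyperlink is an ordinary HTML anchor under the hood. A minimal sketch of the markup (using the GUID above; yours will differ per build):

    <!-- 'Execute' link wired to the blueprint command; the GUID comes from
         Commands.xml and is regenerated when the blueprint project is built. -->
    <a href="blueprints://60db6b1b-5562-4bd7-9773-f8dc8fa3fc32/">Execute</a>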


Now the Workflow window looks as shown below.

[Image: Workflow window after the change - each activity's page now shows an 'Execute' link]

If you click Execute for the 'AddDBConnection' activity, it executes properly, as the activity is green and ready to execute.


But if you click Execute for the 'AddDataEntities' activity, it shows the following message, as that activity is red and stays blocked until 'AddDBConnection' is marked as complete by checking its checkbox.

[Image: Message shown when the Execute link of a blocked activity is clicked]

When you click the 'Details' button to find out what is happening, it shows you the following window.

[Image: Details window listing the parent activities that must be completed first]

I truly admire this feature provided out of the box by Blueprints. It allows us to run activities directly from the .mht file instead of providing menus. In this way activities become truly dependent on each other, preventing the developer from doing anything wrong. Of course, this holds true only if the developer always uses blueprint activities to complete his/her tasks. If the developer does any activity manually, we cannot know whether it is completed or not.

 

January 27, 2009

Should developers touch XAML?

The other day I hit upon this blog - "I hate it when a designer touches XAML" - and honestly, I was surprised by the title. To me, designers are the ones who should be creating the XAML (maybe not hand-crafting it, but using tools like Expression Blend), and developers are the ones who should be working on the code-behind and writing the backend logic. Hence I felt a more apt title would be "Should developers touch XAML?"

However, if you read the above blog by Scott, it has merit, and there definitely are issues with how designers and developers work together. But there is another angle to it, which is more evident if you have attended (or seen the recording of) the talk Seema gave at PDC 2008. You can find the link to it in this earlier blog of mine.

Tools like Expression, when used to generate XAML, may not generate the most efficient XAML, especially as you record animations or create grid columns by dragging the column separator. The various values that get written into the XAML can definitely be optimized further to gain a little more runtime mileage.
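
As a purely illustrative example (the numbers are invented), dragging a column splitter in the designer tends to leave accidental fractional star sizes behind, which you might round off by hand to the proportions the design actually intended:

    <!-- Designer-generated (illustrative): the fractions merely encode where the splitter was dropped -->
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="0.237*"/>
        <ColumnDefinition Width="0.763*"/>
    </Grid.ColumnDefinitions>

    <!-- Hand-edited: the intended 1:3 proportion, stated directly -->
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="1*"/>
        <ColumnDefinition Width="3*"/>
    </Grid.ColumnDefinitions>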

Hence the developer may also need to edit the XAML, and in this case it may literally mean editing the XAML in the XML view in VS/Expression rather than in the designer view. However, this can lead to subtle issues as well, which you should be careful about. See a similar Connect feedback item I had logged for an Expression-generated animation here.

Personally, I don't completely buy into the MS comment on the Connect issue. It would be interesting to hear your thoughts on the same.

January 21, 2009

Consuming .NET Services on the Windows Azure Platform

Microsoft .NET Services, part of the Azure Services Platform, offers building blocks that provide the necessary infrastructure to develop cloud-aware applications. One scenario we wanted to try out was to have an on-premise service consumed by an application hosted on Windows Azure.

Firstly, for an on-premise service to be consumed by an external consumer, the service has to be made publicly available. This is possible using the connectivity capability of .NET Services, which allows on-premise WCF services to be seamlessly made accessible on the public cloud. So we decided to use the .NET Services-supported relay bindings in our on-premise service configuration.

We approached this assuming, for obvious reasons, that .NET Services (Dec 2008 CTP), being part of the Azure platform, would be supported on Windows Azure (Jan 2009 CTP). However, we later found out that not all bindings of the .NET Services platform support this scenario.

With the current release of .NET Services, we found that only "basicHttpRelayBinding"/"wsHttpRelayBinding", with "RelayClientAuthenticationType" set to 'None', was supported in this scenario. Other bindings cannot be used, as they conflict with the current version of the Azure platform.
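
A sketch of the service-side configuration this implies is below. Treat it as an assumption-laden illustration: the service/contract names, binding configuration name and endpoint address are placeholders, and the exact relay binding schema should be verified against the .NET Services (Dec 2008 CTP) bits you have:

    <system.serviceModel>
      <bindings>
        <basicHttpRelayBinding>
          <!-- Relay client authentication turned off, as required for this scenario -->
          <binding name="noRelayAuth">
            <security relayClientAuthenticationType="None" />
          </binding>
        </basicHttpRelayBinding>
      </bindings>
      <services>
        <!-- EchoService, IEchoService and the address are illustrative placeholders -->
        <service name="EchoService">
          <endpoint address="http://servicebus.windows.net/services/YourSolution/Echo"
                    binding="basicHttpRelayBinding"
                    bindingConfiguration="noRelayAuth"
                    contract="IEchoService" />
        </service>
      </services>
    </system.serviceModel>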

Additionally, we noticed that hosting .NET Services endpoints in IIS is not supported in this release, though it apparently was available in the earlier release. Hence IIS as the WCF service host had to be substituted with a custom hosting application such as a console application or a Windows service. Once the WCF service was hosted, a proxy needed to be generated using the svcutil command, the usual process by which any .NET client consumes a WCF service.
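
A minimal sketch of such a console host follows; EchoService is a hypothetical stand-in for the actual on-premise service implementation, with endpoints and relay bindings assumed to come from the configuration file:

    using System;
    using System.ServiceModel;

    class Program
    {
        static void Main()
        {
            // EchoService is a placeholder service implementation; its
            // endpoints and relay bindings are read from app.config.
            using (ServiceHost host = new ServiceHost(typeof(EchoService)))
            {
                host.Open();
                Console.WriteLine("Service is listening. Press ENTER to stop.");
                Console.ReadLine();
            }
        }
    }

Proxy generation then follows the usual route of pointing svcutil at the service's metadata address.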

Interestingly, the configuration file generated by the svcutil utility defaulted the binding type in the client's service model configuration to either wsHttpBinding or basicHttpBinding.

Thus the cloud-exposed on-premise services could be consumed by Azure-hosted applications.

Go to http://social.msdn.microsoft.com/Forums/en-US/netservices/thread/12c846cd-07b0-4c46-a697-98ef7771e249 for more details.

Comparing Objects for Equality

What do we do when we want to do something we haven't done before? A developer's basic instinct says "Google it", and that is what I did. And what was I trying? To write a unit test case that compares two objects for equality. By equality here, I mean similarity of the data present in the objects. The objects I had to compare had sub-objects, properties that are lists, and the usual primitive data type properties. Unfortunately, .NET doesn't have an API that compares objects for this kind of deep equality.

After googling, I found an implementation on CodePlex that very much suited what I wanted, the drawback being that it didn't work if the objects had list properties. I enhanced that piece of code to also work for properties that are sub-objects, lists and primitive data types. Here is the complete piece of code:

    using System;
    using System.Collections;
    using System.Reflection;

    // The containing class name was not shown in the original snippet;
    // ObjectComparer is an arbitrary choice.
    public static class ObjectComparer
    {
        /// <summary>
        /// This method compares two objects for equality (by the data they hold)
        /// </summary>
        /// <typeparam name="T">Object type to be compared</typeparam>
        /// <param name="expected">Expected object</param>
        /// <param name="actual">Actual object</param>
        /// <returns>Boolean value indicating the success or failure of the comparison</returns>
        public static bool CompareObjects<T>(T expected, T actual)
        {
            try
            {
                Type objectsType = typeof(T);

                // When called recursively, T is object, so use the runtime type instead.
                if (objectsType == typeof(object))
                    objectsType = expected.GetType();

                // Strings and value types are compared directly by value.
                if (objectsType == typeof(string) || !objectsType.IsClass)
                    return Convert.ToString(expected) == Convert.ToString(actual);

                PropertyInfo[] properties = objectsType.GetProperties(
                    BindingFlags.Instance | BindingFlags.Public | BindingFlags.GetProperty);

                foreach (PropertyInfo pi in properties)
                {
                    // Check if the property is a list.
                    if (typeof(IList).IsAssignableFrom(pi.PropertyType))
                    {
                        IList listExpected = (IList)pi.GetValue(expected, null);
                        IList listActual = (IList)pi.GetValue(actual, null);

                        if (listExpected.Count != listActual.Count)
                            throw new Exception(String.Format(
                                "Objects do not match Expected Value {0} Actual Value {1}",
                                listExpected.Count, listActual.Count));

                        // Compare the list items recursively; fail if any pair differs.
                        for (int i = 0; i < listExpected.Count; i++)
                            if (!CompareObjects(listExpected[i], listActual[i]))
                                return false;
                    }
                    // If the property is a class (other than string), recurse into it.
                    // Indexed properties are compared at index 0 only.
                    else if (pi.PropertyType.IsClass && pi.PropertyType != typeof(string))
                    {
                        bool indexed = pi.GetIndexParameters().Length > 0;
                        object subExpected = indexed ? pi.GetValue(expected, new object[] { 0 }) : pi.GetValue(expected, null);
                        object subActual = indexed ? pi.GetValue(actual, new object[] { 0 }) : pi.GetValue(actual, null);

                        if (!CompareObjects(subExpected, subActual))
                            return false;
                    }

                    // Compare the string representations; for primitive
                    // properties this is the actual value comparison.
                    object expectedValue = pi.GetIndexParameters().Length > 0 ? pi.GetValue(expected, new object[] { 0 }) : pi.GetValue(expected, null);
                    object actualValue = pi.GetIndexParameters().Length > 0 ? pi.GetValue(actual, new object[] { 0 }) : pi.GetValue(actual, null);

                    if (Convert.ToString(expectedValue) != Convert.ToString(actualValue))
                        throw new Exception(String.Format(
                            "Objects do not match Expected Value {0} Actual Value {1}",
                            expectedValue, actualValue));
                }
            }
            catch (Exception)
            {
                // Any mismatch or reflection failure is treated as "not equal".
                return false;
            }
            return true;
        }
    }
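
As a quick illustration of how the helper can be used from a test, here is a hypothetical example; Customer is an invented type, not part of the original code:

    using System.Collections.Generic;

    // An illustrative type; any type with public properties works.
    public class Customer
    {
        public string Name { get; set; }
        public List<string> Tags { get; set; }
    }

    // In a unit test:
    var expected = new Customer { Name = "John", Tags = new List<string> { "gold" } };
    var actual   = new Customer { Name = "John", Tags = new List<string> { "gold" } };
    bool same = ObjectComparer.CompareObjects(expected, actual);   // true

    actual.Tags[0] = "silver";
    same = ObjectComparer.CompareObjects(expected, actual);        // false: list items differ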

January 20, 2009

Windows Touch and User Experience

One key new feature of the recently released Win 7 beta that should excite UX designers and developers is the support for building touch and multi-touch based interfaces. Some time back, I had posted a blog entry about my first-hand experience of using the Microsoft Surface computing device. These new technologies open huge opportunities for designers to transcend existing user experience limitations and build immersive, life-like interactive applications.

Relating back to the ALIVE design approach I had proposed some time back, touch and multi-touch capabilities are huge because they remove artificial interface elements like the mouse and keyboard and allow more natural interactions. They also make true collaboration possible by letting multiple people simultaneously manipulate things on screen. I think we can place this development in the same league as other computing advances like miniaturization, connectivity and vastly improved processing speeds - all of them having a profound effect on the how, where and what of computers and our daily lives.

I think two factors play a decisive role in helping make computers an integral part of our day-to-day life: content and interface.

The 'content' is the data or media that we consume or interact with using computers. Fancy interactivity and high-tech multi-modal interfaces may create initial excitement, but will not take us very far if the content is limited, irrelevant, boring or restricted. Touch-based products like Surface computers are currently 'content poor' in that sense, and will depend on Microsoft partners to provide applications, games and widgets.

In the Service Oriented Architecture (SOA) and cloud-computing driven world we are entering, access to quality content will progressively become cheaper and easier. With 'mash-ups' becoming a popular way of consuming information, users are getting plenty of flexibility for juxtaposing, layering and blending information - thereby opening unlimited opportunities for switching context and perspective. It is like a kaleidoscope!

This is where the capability of the interface - to intuitively navigate through information, drill up or down as needed, and zoom in or out with just a flick of the fingers - will become key. The richness, and directness, of touch-based interactivity will be a great match for the rich, multi-faceted, multi-layered data we may be interacting with in the not-so-distant future.

Imagine configuring dream cars... playing with your buddies... creating an art masterpiece... manipulating business data... or just plain adjusting the volume of your media player... all this and more without using a mouse, keyboard, joysticks and the like.

As we know, keyboards inherited their form factor from the typewriter. The mouse and joystick had few historic stereotypes to rely on - but they had to evolve and grow as the 'sidekicks' of the omnipresent keyboard. With all of these out of the way, the 'computer and monitor on a table and the user on a chair' paradigm is going to be seriously challenged.

Touch-sensitive monitors may end up being more horizontal to facilitate a natural walk-up-and-use mode. And kiosks too will have opportunities to evolve from the typical wizard-driven user interfaces to more open-ended interactions when the tasks need them to. In the weeks to come, we will look at how, where and what will change as touch computing takes root.

 

January 19, 2009

Default printer issue (Word has stopped working)

Mid last year, I had written about this issue where Word was crashing due to a default printer, which wasn't currently available since I wasn't connected to the office network. Windows 7 offers an interesting solution to this issue.

Did I say Windows? Shouldn't this really be handled by an Office 2007 service pack, or maybe in Office 14? Maybe it will get fixed there as well, but this feature in Windows 7 does offer a solution.

January 17, 2009

Windows 7 Gems

If you are playing with the latest Windows 7 beta build, this bumper list of Windows 7 secrets by Tim Sneath will be useful. If you haven't downloaded the latest bits yet, you can do that here.

January 7, 2009

Silverlight - EnableRedrawRegions is a savior!

The other day we were ready to deploy an internal-facing application built on Silverlight 2.0. As a final round of testing, I thought of putting to use the EnableRedrawRegions setting that I had just learnt about from Seema's talk at PDC 2008, part of her session "Building an Optimized, Graphics-Intensive Application in Microsoft Silverlight".
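
If you want to try the same check, the setting can be flipped from code at startup (it can also be supplied as a param on the Silverlight plug-in in the hosting page). A minimal sketch, assuming you do it in Application_Startup:

    // Show redraw regions for diagnostics; enable this only temporarily,
    // e.g. from Application_Startup in App.xaml.cs.
    Application.Current.Host.Settings.EnableRedrawRegions = true;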

Interestingly, after the application loaded, I could see a small portion in the middle of the page continuously changing colors. It could only mean that this region was getting redrawn again and again. But why? Checking the code, we realized that we had an animation at load time, and once the data was loaded, the animating panel was just pushed to the back in Z-order while another panel came on top. This meant the animation was still running, we just could not see it. The EnableRedrawRegions flag helped immediately identify and remove this unwanted animation (we set the panel's visibility to Collapsed).

January 2, 2009

Creating Parent – Child Blueprints

Today we will see how to relate one blueprint to another as parent-child. This is a very useful feature that Blueprints provides, as it makes sure the user has unfolded the parent blueprint before using the child blueprint. It also helps to relate blueprints according to their functionality. We can even control whether the child blueprint unfolds along with the parent blueprint or separately.

The steps to add a related blueprint are easy and straightforward.

First of all, create the two blueprints that you want to relate to each other. Suppose we have two blueprints named ParentBlueprint and ChildBlueprint.

Build both of these blueprints. This step is important before defining the relationship between the two blueprints, as the build process generates the GUIDs required for the parent-child relationship.

Now right-click ParentBlueprint and open the menu Blueprints -> "Edit Configuration". Go to the "Related Blueprints" tab and "Add" the related blueprint information for ChildBlueprint.

[Image: "Related Blueprints" tab of the Edit Configuration dialog]

The information that needs to be provided is as follows.

  • Description: As the name suggests, this is the description of the related blueprint. In our example it is "Child Blueprint".
  • Workflow GUID: This is the GUID of the command the child blueprint uses to show its Workflow window. You can get the value of this command GUID from the "Commands.xml" file in the "Properties" folder of the ChildBlueprint project. The command used for the workflow looks like the entry shown below.

[Image: Workflow command entry in Commands.xml]

Note that these entries are automatically added when you build the blueprint, so building is necessary to get this GUID.

  • Unfold GUID: This is the GUID of the command the child blueprint uses to unfold a project. You can get the value of this command GUID from the same "Commands.xml" file in the "Properties" folder of the ChildBlueprint project. The command used for unfolding looks like the entry shown below.

[Image: Unfold command entry in Commands.xml]

  • Unfold: If this is true, the child blueprint is unfolded automatically whenever the parent blueprint is unfolded.
  • Dependent: If this is true, the child blueprint is marked as dependent on the parent blueprint.

Build ParentBlueprint now and you should see the parent-child relationship in the Blueprint Manager, as follows.

[Image: Parent-child relationship shown in the Blueprint Manager]

Is Oslo going to make the role of developer obsolete?

Check a lively discussion on stackoverflow at http://stackoverflow.com/questions/270401/is-oslo-going-to-make-the-role-of-developer-obsolete

Once every few years we come across vendors who make such bold predictions. In this case MS has not made any such prediction, but hysteria is already being generated in the community.

My opinion is that Oslo-like technology and integrated tools are a way forward for software engineering and will make the developer's job easier. Such tools would help us focus on solving unsolved problems instead of solving the same repetitive problems over and over.

What do you think?
