The commoditization of technology has reached its pinnacle with the advent of the recent paradigm of Cloud Computing. Infosys Cloud Computing blog is a platform to exchange thoughts, ideas and opinions with Infosys experts on Cloud Computing


July 29, 2011

Step by step approach to expose on-premise database using Azure infrastructure - Part 2

In the last blog we looked at using Azure Connect to expose an on-premise SQL database, along with the points of concern and the benefits of that approach. In this blog we will look at another approach, using Azure AppFabric Service Bus.

Option 2 - Expose the on-premise database to the consumer (a WCF service in an Azure web role, or any external consumer) over HTTP(S) using Azure AppFabric Service Bus

Later in the post, we will look at the obligations and benefits that should be taken into consideration before deciding on this approach.

Steps to be followed

1.     Install the ADO.NET Entity Framework. Currently the latest version is 4.1, and it can be installed from here. This will add the required supporting libraries and the Visual Studio templates.

2.     Create a WCF Service Application project and make sure to set the target framework to 3.5 or higher. To the newly created project add a new ADO.NET Entity Data Model project item:


3.     In the Entity Data Model wizard, choose "Generate from database" and click "Next":


4.     In the next screen, click on Connection and in the pop-up provide the Server Name,  Authentication details and then select the concerned database:


And once the connection to the SQL Server database is established successfully, clicking OK shows the connection details:


5.     Select the table(s) of concern and click on Finish to show the resultant entity model details:



6.     Once the data model is created for the concerned tables, these tables need to be exposed as OData RESTful interfaces, and for this add a new WCF Data Service project item to the project:


7.     In the newly added service code-behind file (e.g. WcfDataService1.svc.cs), there is a placeholder for the database entities created earlier in steps 4 and 5:


Substitute it with the correct object defined in the data model code-behind (e.g. Model1.edmx → Model1.Designer.cs, inheriting from ObjectContext):




8.     To define rules governing which operations (read, update, etc.) are available on the different entities, config.SetEntitySetAccessRule can be used. For example, to allow all operations on all entities:

config.SetEntitySetAccessRule("*", EntitySetRights.All);
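Putting steps 7 and 8 together, the service class might look like the following sketch (Model1Container is a hypothetical container name; the actual name is generated by the Entity Data Model wizard in step 4):

```csharp
using System.Data.Services;
using System.Data.Services.Common;

// Sketch of WcfDataService1.svc.cs after substituting the entity container type
public class WcfDataService1 : DataService<Model1Container>
{
    // Called once to set service-wide policies
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Allow all operations on all entity sets
        config.SetEntitySetAccessRule("*", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
```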



9.     Now the so-formed service endpoint needs to be exposed to clients outside the corporate network (firewall) using Azure AppFabric Service Bus. For this, a service with an endpoint binding of "webHttpRelayBinding" needs to be added explicitly to the project configuration file:

      <service name="ExposeOnPremiseSQL.WcfDataService1" behaviorConfiguration="SQLExposeServiceBehavior">
        <endpoint name="RESTEndPoint"
                  address="https://[services-bus-namespace].servicebus.windows.net/WcfDataService1/"
                  behaviorConfiguration="webhttpSharedSecretClientCredentials"
                  contract="System.Data.Services.IRequestHandler"
                  binding="webHttpRelayBinding"
                  bindingConfiguration="webHttpRelayEndpointConfig" />
      </service>

      <bindings>
        <webHttpRelayBinding>
          <binding name="webHttpRelayEndpointConfig">
            <security mode="None" relayClientAuthenticationType="None" />
          </binding>
        </webHttpRelayBinding>
      </bindings>

a.     services-bus-namespace - the AppFabric Service Bus namespace created under the Azure account subscription, through which the service will be exposed

b.     webhttpSharedSecretClientCredentials - the service endpoint behavior holding the credentials used by the service to authenticate and authorize itself to the Azure service bus:


      <behavior name="webhttpSharedSecretClientCredentials">
        <transportClientEndpointBehavior credentialType="SharedSecret">
          <clientCredentials>
            <sharedSecret issuerName="issuer-name" issuerSecret="secret-key" />
          </clientCredentials>
        </transportClientEndpointBehavior>
        <webHttp />
      </behavior>

Issuer-name and secret-key are like a user name and password generated during the creation of the service bus namespace. They authorize a service to expose itself through the service bus, or a client to call a service endpoint exposed through it. The webHttp element is there to support the RESTful WCF service.

c.     System.Data.Services.IRequestHandler - the contract through which the WCF service is exposed as an OData WCF Data Service.

10.     Make sure to comment out:

<serviceHostingEnvironment multipleSiteBindingsEnabled="true" aspNetCompatibilityEnabled="true" />


Because Azure service bus currently doesn't support multiple IIS bindings per site.

11.     Once the above configurations are done, try to browse the WcfDataService1.svc


        e.g. http://localhost:20918/WcfDataService1.svc/ .

This will list all the database tables set to be exposed through the data service (in accordance with step 5 above). Browsing the service also registers it with the service bus at the configured endpoint address.

Now, appending any table name to the service URL, try to view its contents:
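Assuming, purely for illustration, that a table named Customers was included in the model in step 5, typical OData request URIs would look like:

```
http://localhost:20918/WcfDataService1.svc/Customers        -- all rows
http://localhost:20918/WcfDataService1.svc/Customers(1)     -- a single row, by key
http://localhost:20918/WcfDataService1.svc/Customers?$top=5 -- the first 5 rows
```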

12.     While hosting the so-formed service in IIS, make sure to enable auto-start for the service.

Points to be considered

The following few paragraphs highlight some points that should be considered while making the decision:

1.     Since the WCF Data Service is bound to the table schema, any future change to the table schema requires the entity data model in the WCF Data Service to be updated explicitly to match; otherwise the response may not be as expected.

2.     Since the RESTful WCF service is exposed through Azure AppFabric Service Bus as an intermediate layer, message-level security is more applicable than transport-level security.

3.     Once the service is deployed in the on-premise IIS, make sure to enable auto-start for it. When a WCF service is hosted in IIS, it is normally started only when the first request is received. In this case, however, the service needs to be started before any request is made, because only a started service registers itself with the service bus and exposes a public URL that a client on the internet can use when making a service request. Enabling auto-start ensures that the moment the hosting process (IIS) starts, the hosted service starts as well.

4.     When an on-premise service is exposed through Azure AppFabric Service Bus, there is no load balancing even at the service-bus end, as only one instance is running to respond. Moreover, multiple IIS bindings per site are currently not allowed in this scenario.


Benefits

1.     Without making any changes to the on-premise machine hosting the SQL Server, the entire data can be exposed and consumed from outside the firewall; not even physical access to the machine is required.

2.     Since Windows integrated authentication can still be used by the on-premise service to connect to the SQL Server instance in concern, proper access control can be put in place.

3.     By defining separate service endpoints, clients internal and external to the corporate firewall can be handled independently, for example over the "http" scheme for internal and "https" for external clients.

4.     With this approach, access to the database can also be restricted by type of accessibility while defining the entity data model, for example "read" access only for clients outside the corporate firewall.

5.     These OData services can easily be consumed by adding a service reference in a .NET client application. This will create a proxy with a container for each of the exposed database tables, and a simple LINQ query like the following (entity, container, table and property names here are placeholders) fetches the data:

var context = new service_proxy.dataEntityModelNameContainer(new Uri(serviceUrl));

var result = from entity_name in context.table_name
             where entity_name.property_name == propertyValue
             select entity_name;
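As noted in point 4 above, access can also be narrowed at the service side. A minimal sketch, reusing the access-rule call from step 8, would grant read-only rights:

```csharp
// Grant read-only access (single and multiple reads) on all entity sets,
// instead of EntitySetRights.All
config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
```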



Big Data and Cloud Computing

It is well known that leveraging the Cloud for high-compute workloads over a short span of time is a good business case.

Getting business insights from Big Data is becoming mainstream, and the Cloud is becoming an ideal choice for it.


What is Big Data?

With the advent of Web 2.0, social networking, mobile computing, location-based services, sensor networks, etc., there has been a huge explosion of data.

This new breed of data, typically unstructured or semi-structured and of huge volume and complexity, is termed "Big Data".


Why analyze Big Data?

Across industries it is becoming important to make sense of this data and gain competitive advantage. Below are a few use cases of Big Data:

·         In the Telecom industry, operators generate 5 to 10 TB of call records every day. This Big Data can be mined for marketing campaigns and network optimization.

·         In the Retail industry, pricing is a very important attribute. Big Data helps compare pricing across retailers and product categories and set the optimum price.


Why Cloud Computing for Big Data?

Traditional data management and business intelligence infrastructure and tools are good at handling limited volumes of structured and semi-structured data, but they are inadequate for Big Data.

Cloud infrastructure and a new breed of software ecosystems are powering the capture, management and analysis of this Big Data.


One common solution is to deploy the Apache Hadoop software stack in the Amazon AWS EC2 and S3 environment. Hadoop is an open-source implementation based on Google's MapReduce programming model.
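To illustrate the MapReduce idea itself (a sketch only, not Hadoop code), a word count in the map/reduce style can be expressed with LINQ:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class MapReduceSketch
{
    static void Main()
    {
        var documents = new[] { "big data on cloud", "cloud computing and big data" };

        // Map: emit a (word, 1) pair for every word in every document
        var mapped = documents.SelectMany(doc => doc.Split(' ')
                              .Select(word => new KeyValuePair<string, int>(word, 1)));

        // Shuffle + Reduce: group the pairs by word and sum the counts
        var reduced = mapped.GroupBy(kv => kv.Key)
                            .Select(g => new { Word = g.Key, Count = g.Sum(kv => kv.Value) });

        foreach (var r in reduced)
            Console.WriteLine(r.Word + ": " + r.Count);
    }
}
```

In Hadoop, the map and reduce phases run as distributed tasks across a cluster; the LINQ pipeline above only mirrors the shape of the computation on a single machine.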

Also, Microsoft has recently launched Azure Cloud Big Data service code named Project Daytona.


Some of the challenges to be addressed in such a solution are transferring huge volumes of data to the Cloud environment and ensuring the confidentiality of that data.


Does your enterprise have a plan to get business insights by using Big Data and Cloud?

July 19, 2011

Step by step approach to expose on-premise database using Azure infrastructure

With the advancement of the Azure cloud infrastructure, many heterogeneous requirements arise for systems combining on-cloud and on-premise components. This blog series explains the options, steps, concerns and benefits of different approaches, especially from the on-premise database point of view.


Let us consider an existing on-premise system consisting of database server(s) and a few WCF services in application server(s). The requirement is to move these services to the Azure infrastructure (perhaps to support prospective external service consumers, and to leverage cloud benefits such as low cost of ownership and low cost of maintenance) while keeping the database(s) on-premise (i.e. within the corporate firewall, perhaps because of some company constraint or policy) and exposing them to the WCF services deployed in Azure as a web role. The options described may not be the only ones, but they are certainly among the quickest to adopt and the most beneficial.

Later in the post, we will look at the obligations and benefits that should be taken into consideration before deciding on this approach.

Option 1 - Expose the on-premise database to the consumer (WCF service) in the Azure web role over TDS using Azure Connect.

Steps to be followed

The goal is to keep the backend database on-premise and expose it over the "TDS" protocol, so that once the concerned WCF services are moved to Azure web roles, the database-access code logic (using the SQL client API) does not need to change. For this we need a kind of "local virtual network" with IPsec-protected connections between computers (the on-premise database server) and virtual machines (hosting the WCF Azure web role). Azure Connect can be leveraged to achieve this.

1.      Login to the Azure management portal

2.      Select the "Connect" icon from the top panel:


3.      Select the subscription under which the said virtual network is to be created and where the WCF Azure web role will be deployed:


4.      Click the "Install Local Endpoint" icon from the top and then copy the URL provided from the pop-up:


5.      Now go to the physical machine where the database is present, browse this URL and install the Azure Connect local endpoint:


6.      Once the endpoint is installed on the machine, an icon will be shown in the task tray, but with the message "limited connectivity", because the virtual network is not yet created


7.      Now, on the same SQL Server machine, open TCP port 1433 in the firewall so that inbound requests are allowed:



8.      For the SQL instance in concern, allow remote connection:



9.      Now we have to configure the web role hosting the WCF service to connect to the same virtual network under the concerned subscription. To do this, in the Azure management portal select "Get Activation Token" from the top panel and copy the token provided:



10.      In Visual Studio, open the solution containing the WCF project and add a blank Azure Cloud project (make sure the Azure tools for Visual Studio are installed):


11.      In the newly added blank project, go to "Roles", right-click on it, select "Add" and then the "Web Role Project in solution..." menu:


12.      This will list the WCF project, select it and click "OK". This will add a role entry in the initially added blank Azure cloud project.

13.      Right-click the newly added entry in the "Roles" folder and select the "Properties" menu. Then go to the "Virtual Network" tab, check "Activate Windows Azure Connect" and provide the activation token copied in step 9 above:


14.      In the WCF project, modify the database connection string to include the user name and password for SQL authentication, as Windows integrated authentication will not work over the Azure Connect network. So make sure mixed-mode authentication is enabled on the SQL instance in concern.
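For illustration (the server, database and credential names below are hypothetical), the modified connection string in the service's web.config might look like:

```xml
<connectionStrings>
  <!-- SQL authentication instead of Integrated Security=True -->
  <add name="OnPremiseDb"
       connectionString="Data Source=ONPREM-SQLSERVER;Initial Catalog=SampleDb;User ID=svc_user;Password=secret"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```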

15.      Build the cloud project and deploy it to Azure. Once the role instance(s) are started, in the Azure management portal (in the Azure Connect section) for the concerned subscription, select "Activated endpoints". It will list the recently deployed web role(s) and the machine(s) where the local endpoint is installed (step 5):


16.      Now we need to create a group; only then can the interconnection between the different role instances and machines be established. Select the "Groups and Roles" menu and click the "Create Group" icon in the top panel:


17.   In the "Create a New Endpoint Group" window, "Add" the different machines (having the endpoint installed) in the first section and the web roles in the second section:


18.   Once the group is created successfully, the Azure Connect icon in the task tray of the machine(s) having the endpoint installed will change to show a connected status. In case the status is not shown as connected, right-click the icon and select to refresh the policy:


19.      Once the connection is successfully established, try consuming the deployed WCF service from any client and make the required service operation calls.

20.      The interconnection can also be verified by enabling remote connections on the web role VM as well as the on-premise machine having the database, and trying to "ping" each other. Make sure to run the following command (in an elevated command prompt) on each of the machines (the on-premise machine having the SQL Server and the VM having the WCF web role) before trying to ping:

netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6

Points to be considered

The following few paragraphs highlight some points that should be considered while making the decision:

1.      We need admin rights and direct access to the physical machine(s) where the database(s) reside (third party or internal) in order to install the Azure Connect endpoint, which is needed so that these machines can be linked into the virtual network that also contains the VM hosting the WCF web role.

2.      TCP port 1433 needs to be opened for inbound requests in the firewall of the machines having the databases. This, too, requires admin rights and direct access to the physical machine(s).

3.      The connection string used by the WCF service (in the Azure web role) to connect to the on-premise database needs to have a user name and password defined, as Windows integrated authentication will not be supported. Hence the concerned connection string needs to be encrypted properly.

4.      Since the database is on-premise and its consumer (the WCF service) is in an Azure role, the data is sent over the wire; hence it needs to be encrypted, or a setup needs to be in place to prevent illegitimate data access or purging.

5.      Extra latency will be introduced, as the database server and the app server hosting the WCF service are no longer on the same physical network.

6.      If the service being moved depends on on-premise resources other than the database, such as Active Directory or an SMTP server, we need to look for corresponding substitutes (or enable these as well to connect to the Azure Connect virtual network) so that they are accessible from the Azure web role.


Benefits

One may use or retain (in the case of an existing setup) the simple SQL client API based database-access code logic to connect to the on-premise SQL Server from a consumer outside the corporate firewall.
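As a sketch of that benefit (the server, database, table and credential names are hypothetical), the same SqlClient code that worked on-premise continues to work unchanged from the Azure web role once Azure Connect links the two machines:

```csharp
using System;
using System.Data.SqlClient;

class OnPremiseDbCheck
{
    static void Main()
    {
        // SQL authentication is used because Windows integrated authentication
        // is not supported over the Azure Connect network (see step 14).
        var connStr = "Data Source=ONPREM-SQLSERVER;Initial Catalog=SampleDb;" +
                      "User ID=svc_user;Password=secret";

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn))
            {
                // Execute a trivial query to confirm end-to-end connectivity
                Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }
}
```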

Next post in the series

Option 2 - Expose the on-premise database to the consumer (WCF service) in the Azure webrole (or any external consumer) and over HTTP(s) using Azure appfabric Service Bus... 

For the complete next post, please refer to this.