Data & Analytics – IFI Techsolutions (Microsoft Solution Partner)

Machine Learning in Power BI
Tue, 24 Dec 2024

Author – Sana Khatoon, Data Engineer

What you will take away from this blog:
  • Get familiar with Azure Machine Learning Studio Workspace
  • How to create experiments
  • Calculate Mean and Standard Deviation on MPG data for various automobiles
  • Integrate Machine Learning Model in Power BI Desktop
  • Visualize your model
Prerequisites:
  • An Azure subscription for creating an Azure Machine Learning Studio workspace
  • Python installed on your system
  • The AzureML library installed (e.g., via pip)
  • An understanding of the linear regression algorithm
Azure Machine Learning Studio Workspace

To develop a machine learning model, you pull datasets from different kinds of sources, apply transformations and manipulations to them, analyze the data with statistical functions, and visualize the outputs as a set of results. Azure Machine Learning Studio brings all of these capabilities into a single workspace. It gives you an interactive way of building, testing, and deploying your model using drag and drop. Isn't that cool? It doesn't require any coding knowledge; you simply drag your datasets and analysis modules onto an interactive canvas. You can edit your experiments, save them, and run them again. When you run your model, the canvas splits into two parts: training experiments and predictive experiments. After that, the model can be published as a web service.

Create Azure Machine Learning Studio (classic) Workspace

Step 1: Go to the Azure Portal, search for Azure Machine Learning Studio (classic) Workspace, and create it.

Step 2: Under Resource Group, select your resource group if you have already created one, or create a new one.

Step 3: Give your workspace a name under Instance details. In my case it is MLForPB.

Step 4: Click on Storage Account. It will ask you to enter a name and choose a performance tier. We need the storage account to store our output files.

Step 5: Click on Web Service Plan. The web service is used to operationalize your model and generate predictions from the pre-trained model.

Step 6: Select Dev Test Standard under the web service plan pricing tier and click Review + Create.

Now, go to your Azure Machine Learning (classic) workspace and launch the studio. A new window will open; if you are coming to this window for the first time, you need to log in. Click on My Experiments, and it will redirect you to the project environment.
Creating a New Experiment

Now, let's create a new experiment by clicking on Blank Experiment. Once you click it, you will get the screen below. This area is called the canvas; you can simply search for content and drag it onto the canvas. We are going to build a model to find the mean and standard deviation of MPG data for various automobiles, then integrate and visualize it in Power BI Desktop.

Step 1: Search for "MPG data for various automobiles" and drag it onto the canvas.

Step 2: If you want to see the dataset you have selected, right-click on output port 1 (circled on the dataset) and select the Visualize option.

Now, we need to do some transformations. Here we will eliminate the columns that will not be required later. I chose to eliminate the HorsePower and Model columns.

Step 3: To eliminate columns, search for Select Columns in Dataset and drag it onto the canvas, then click on Launch column selector and select columns as per your model's requirements.

After that, we need to split the dataset into training and testing sets in the ratio of 8:2.

Step 4: Search for Split Data and drag it onto the canvas. Enter 0.8 in the Fraction of rows field; this means you are giving 80% of the data to the training model.

Now we need a model to predict the mean and deviation; for that we will use the Bayesian Linear Regression model.

Step 5: Search for Bayesian Linear Regression and drag it onto the canvas. Connect it to the first input of the Train Model module, and connect the second input of Train Model to Split Data. What we are doing here is providing 80% of the data to the training model; with the help of Bayesian linear regression statistics, it will evaluate the results. The remaining 20% of the dataset will be used for testing.

Step 6: We also need to specify the single column to be predicted. When you click on Train Model, you will get a screen like the one below.
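The drag-and-drop pipeline above (select columns, 80/20 split, Bayesian linear regression, score) can be sketched in code as well. This is an illustrative scikit-learn analogue, not what Studio runs internally; the synthetic weight/MPG columns are assumptions standing in for the sample dataset.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split

# Toy stand-in for the MPG dataset (columns are assumed, for illustration).
rng = np.random.default_rng(0)
weight = rng.uniform(1500, 4500, 200)               # feature: vehicle weight
mpg = 45 - 0.008 * weight + rng.normal(0, 2, 200)   # target: miles per gallon

# Step 4 equivalent: 80/20 train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    weight.reshape(-1, 1), mpg, train_size=0.8, random_state=0)

# Step 5 equivalent: train a Bayesian linear regression model.
model = BayesianRidge().fit(X_train, y_train)

# Step 8 equivalent: score the held-out 20%, with a predictive deviation.
pred_mean, pred_std = model.predict(X_test, return_std=True)
print(pred_mean.mean(), pred_std.mean())
```

The `return_std=True` flag is what surfaces the per-prediction deviation, the same pair of statistics (mean and deviation) the deployed web service returns.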
Click on Launch column selector.

Step 7: Select the MPG column.

Step 8: The final step is to score our model. Search for Score Model, drag it onto the canvas, and connect one input to Train Model and the second input to the Split Data output that contains the 20% test dataset.

Step 9: Click on Set Up Web Service and select Predictive Web Service. As soon as you click, the canvas splits into two parts: training experiments and predictive experiments.

Step 10: Click on Deploy Web Service and select Deploy Web Service Preview.

Step 11: After the successful deployment, you can check the results. In my case the mean is 8.88 and the deviation is 2.92 for my input data.

Integrate in Power BI

Now it is time to go to the final destination, i.e. Power BI. Install Power BI Desktop on your system, open a new file, and go to Transform Data. It will open a new window called the Power Query Editor.

Note: You need to load the data into Power BI Desktop first; otherwise you won't get the Azure Machine Learning option in the Power Query Editor.

Step 1: In the Power Query Editor window you can do all the transformations. We need to remove the columns that are not required and rename the columns to match our original dataset.

Step 2: Click the Azure Machine Learning option as shown.

Step 3: When you click on Azure Machine Learning, it will load the model that you built in the Azure Machine Learning (classic) workspace. Click OK.

Note: The privacy level should be the same on both sides, and you need to install the AzureML library, or else you might get a compatibility error.

Step 4: As soon as you click OK, the model will start loading and you will see the results in the form of [Records]. Expand the records by clicking on the double-headed arrow as shown below.

Step 5: All the columns present in your model, along with Mean and Deviation, will be populated.
Click OK, and you will get the results.

Step 6: Click on Close & Apply.

Visualization in Power BI

Step 7: Go to the visualization view and create your visuals.

Step 8: Here, I have created two visuals:
  • Page 1: In this visual, I chose a bubble chart which shows the best-fit line along with the outliers. The linear regression equation fitted over all the MPG values is y = 7.59x + 14.43, which will vary when you change the MPG values.
  • Page 2: In the second visual, you can see the mean and deviation for each MPG value, presented in a line chart.

The post Machine Learning in Power BI first appeared on IFI Techsolutions.

Move and Extract Data in Batches Using Azure Data Factory
Tue, 11 Jan 2022

Author- Ashfaq Pathan (Data Engineer)

Move and Extract data in batches using Azure Data Factory

Prerequisite:

  • Azure Subscription
  • Azure Data Factory
  • Azure Storage Account

 

Problem Statement:

While moving all the data from the last 15 minutes from Azure File Share to Azure Blob Storage, the volume of data varied in every pipeline run. For example, in some runs the last 15 minutes amounted to around 11 GB of data movement, while in others it was less than 1 GB. However, pipeline execution time was similar in both cases despite the difference in data volume, resulting in inconsistent copy times.

To move the data at good speed, regardless of data size, and within the expected duration, we needed a better approach.

Solution:

We divided the pipeline load into small time batches. For example, a data move for the last 1 hour (60 minutes) can be divided into 4 batches of 15 minutes each; these 4 batches then run at the same time. With this parallelism, the pipeline finishes within the expected duration.

When we divided the overall load into time batches, the compute load on the pipeline was also divided among the 4 batches, hence the faster execution of the pipeline.
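As a rough sketch of this batching scheme (assumed parameter values: Pipeline_Time = 60, Batch_Size = 15), the pipeline's Until loop builds an array of negative minute offsets, and each ForEach iteration copies one 15-minute window of the previous hour:

```python
from datetime import datetime, timedelta

# Assumed pipeline parameters; these mirror the ADF parameters, not a real API.
pipeline_time = 60   # total window to move, in minutes
batch_size = 15      # size of each batch, in minutes

# Equivalent of the Until loop: accumulate negative offsets -15, -30, -45, -60.
batch_array = []
batch_time = -batch_size
while pipeline_time >= -batch_time:
    batch_array.append(batch_time)
    batch_time -= batch_size

# Equivalent of the ForEach: each offset defines one copy window
# relative to the trigger time truncated to the hour (Time_Path).
trigger_hour = datetime(2022, 1, 11, 12, 0)
windows = [(trigger_hour + timedelta(minutes=o),
            trigger_hour + timedelta(minutes=o + batch_size))
           for o in batch_array]
for start, end in windows:
    print(start, "->", end)
```

Together the four windows cover the full previous hour, which is exactly what the Copy activity's "filter by last modified" start/end expressions select.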

 


Azure Storage Account

Storage Account Name: "enprodeventhubtest". In the Azure storage account, before starting with the Data Factory, we need to create some folders in the file share and some containers in the Blob storage, as follows:

Azure Blob Storage container: [heartbeat]

Azure File Share: [heartbeatlogs], with folder: [heartbeat]

Storage account containers/folder structure:


Linked Services

Linked services are much like connection strings, which define the connection information needed for the service to connect to external resources.

We must create 2 linked services for our Azure Data Factory pipeline to work.

Below is a screenshot of the linked services in Azure Data Factory:

Azure File Share: filesharetest
Azure Blob Storage: blobstoragetest


Datasets

Details


Pipeline: DataMove_BatchTime

Overview of Pipeline:

DataMove_BatchTime: In this pipeline, we have 2 parameters, 4 variables, 9 activities, and 1 trigger.

This pipeline will run every hour with the help of the trigger.

Pipeline Structure

Linked services:

Parameters:

Variable:

Activity:

 


Pipeline Screenshot:

Follow the image and the table below to configure the same pipeline.

Activities and dependencies

1. Set variable: Store_Trigger_Hour
   Variables: Name: Time_Path
   Value: @concat(formatDateTime(pipeline().TriggerTime,'HH'),':00')

2. Set variable: Batch_Time_Variable
   Variables: Name: Batch_Time
   Value: @string(mul(int(pipeline().parameters.Batch_Size),-1))

3. Until: Loop_Until_Time_equals_Pipeline_Time
   Settings: Expression: @less(int(pipeline().parameters.Pipeline_Time),mul(int(variables('Batch_Time')),-1))

3.1 Append variable: Batch_Array_variable
   Variables: Name: Batch_Array
   Value: @variables('Batch_Time')

3.2 Set variable: Batch_Time_Increment_variable
   Variables: Name: Batch_Time_Increment
   Value: @string(sub(int(variables('Batch_Time')),int(pipeline().parameters.Batch_Size)))

3.3 Set variable: Set_Batch_Time_variable
   Variables: Name: Batch_Time
   Value: @variables('Batch_Time_Increment')

4. ForEach: Loop_Every_Batch
   Settings: Items: @variables('Batch_Array')

4.1 Copy Data: Copy_To_Destination_Blob
   General: Retry: 3
   Source: Source dataset: 01_heartbeat_FS_json_source
   File path type: Wildcard file path
   Wildcard path: heartbeatlogs/@concat('heartbeat/',formatDateTime(subtractFromTime(pipeline().TriggerTime,1,'Hour'),'yyyy/M/dd/HH'))/*.gz
   Filter by last modified:
   Start time (UTC): @addminutes(variables('Time_Path'),int(item()))
   End time (UTC): @addminutes(variables('Time_Path'),add(int(pipeline().parameters.Batch_Size),int(item())))
   Recursively: Yes
   Sink: Sink dataset: 01_heartbeat_blob_json_sink
   Copy behaviour: Preserve hierarchy
   Settings: Data integration units: 32; Degree of copy parallelism: 48

4.2 Delete: Delete_From_Source_FileShare
   Source: Source dataset: 01_heartbeat_FS_json_delete_source
   File path: heartbeatlogs/@concat('heartbeat/',formatDateTime(subtractFromTime(pipeline().TriggerTime,1,'Hour'),'yyyy/M/dd/HH'))/Null
   File path type: Wildcard file path
   Wildcard file name: *.gz
   Filter by last modified:
   Start time (UTC): @addminutes(variables('Time_Path'),int(item()))
   End time (UTC): @addminutes(variables('Time_Path'),add(int(pipeline().parameters.Batch_Size),int(item())))
   Recursively: Yes

Schedule Trigger:

As per our use case, we will set up a schedule trigger that runs the pipeline every hour.

1. Open the pipeline and click Add trigger.

2. Click on + New.

3. Configure it the same as in the image below, and click OK.

The post Move and Extract Data in Batches Using Azure Data Factory first appeared on IFI Techsolutions.

Azure Machine Learning Integrated with Power BI
Thu, 09 Dec 2021

Author – Sana Khatoon

Data Engineer

What you will take away from this blog:

  • Get familiar with Machine Learning Workspace
  • Predict Diabetes Score using Linear Regression
  • Integrate Machine Learning Model in Power BI Desktop
  • Visualize your model using Python Visuals

Prerequisites:

  • An Azure subscription for creating an Automated ML workspace
  • Python installed on your system
  • An understanding of the linear regression algorithm

Let's first have a quick overview of linear regression, and then we will dive into the process of creating an Automated ML model and integrating it into Power BI.

Linear Regression Overview

Linear regression is a linear approximation of a relationship between two or more variables. Regression models are widely used by data scientists to make predictions over continuous numerical values. The process of linear regression is broadly as follows:

  • Come up with a dataset of your choice, but it should serve some prediction objective
  • Design a machine learning model that works on the dataset
  • Make predictions on the dataset (based on the linear regression algorithm)

There is a dependent variable Y being predicted and independent variables X1, X2, X3, …, Xn. Here X is a predictor and Y is a function of the X variables. The simple equation of linear regression is:

Y_i = f(X_i, β) + e_i

Random Forest Model

Random forest regression is a bagging technique in which parts of the main dataset are distributed among multiple decision trees, each of which makes a prediction. Finally, based on the root mean square error (RMSE), the predictions are aggregated and the best predictive model is chosen.

In the random forest process, we have some base learner models M1, M2, M3, …, Mn. These base learners are called decision trees. Each decision tree randomly picks up a number of rows and columns from the main dataset; this process is called row sampling for rows and feature sampling for columns. In this way, every base learner/decision tree gets its own dataset D'. This forms a bootstrap sample, which is aggregated according to the bagging process.

This background is enough to understand the working of our model.
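As an illustrative local sketch of this idea, scikit-learn's RandomForestRegressor can be trained on the same open diabetes dataset and evaluated with RMSE. This is a stand-in for what Automated ML does in the workspace, not the service itself.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# The open diabetes dataset, the same sample used in the Automated ML run below.
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of decision trees, each fit on a bootstrap (row) sample.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# Evaluate with root mean square error, the blog's primary metric.
rmse = np.sqrt(mean_squared_error(y_test, forest.predict(X_test)))
print(f"RMSE: {rmse:.1f}")
```

Automated ML performs essentially this train-and-score loop across many algorithms and ranks them by (normalized) RMSE.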

Creating Machine Learning Workspace

The workspace is the top-level resource you need to create to work in the Machine Learning environment. Azure provides different types of workspaces; users create a workspace according to their needs and requirements.

In our case, we will create Machine Learning Workspace by following the below steps-

  • Go to Azure Portal
  • Search for Machine Learning on search bar
  • Select Machine Learning Option

  • Click on Create to create the workspace

  • Provide the workspace name; the rest of the details can be left at their defaults
  • Click on the blue Review + Create button

To Launch the workspace, go to the workspace and click Launch Studio.

Here you will get the Machine Learning workspace, where we will define the dataset and train the model based on linear regression.

Creating Experiments using Automated ML

Automated ML-

  • Click on automated ML option

 

  • Click on New Automated ML run

  • Click on the Create New Option and select Automated ML run

 

  • Next step is to choose the dataset, click on Create datasets and select From open datasets

 

  • Now, search for diabetes in the search box and select sample: Diabetes
  • Click on Next

 

  • Give the name of the dataset and click on Create button

Now we have successfully created the dataset. The next step is to configure the model.

  • Select the Sample: Diabetes and click on Next

  • In the next step, we are required to provide some details, as below:
      1. Select new experiment
      2. Give the name of the experiment
      3. Select target column Y (the actual value on which the model will make predictions)
      4. Select compute type as compute cluster
      5. Select the Azure ML compute cluster compute1 (if not already created, you need to create a new one)
      6. Click on Next

 

  • Now we need to choose the modelling technique or algorithm on which our model will work. In our dataset, the Y column consists of continuous numerical values, so we will select Regression here
  • Click on Next

Here, by default, the model will select RMSE (root mean square error) as the primary metric. You can change it by clicking on View additional configuration settings.

  • In the next step, Keep the Auto validation type as Auto
  • Select no test dataset required under Test Data Preview Option
  • Click on Finish

Now you may relax and watch the magic Automated ML prepares for you. This is a code-free platform, so you need not worry about the calculations and the logic behind the model, but a basic understanding of the algorithm is required to understand and interpret the results.

Note: It will take approximately 30 minutes to train the model.

When you create an experiment, Automated ML will create multiple models for you.

  • Based on the normalized root mean squared error, we will select our best model, i.e. Random Forest, and deploy it as a web service

  • Here you need to provide some details like name of the model and compute type
  • Click on Deploy

Note: If we do not deploy the model, it will not be visible to Power BI.

Integrate the model into Power BI

Before integrating the model into Power BI, we will make our Power BI engine compatible with Python.

  • In Power BI Desktop, Go to File -> Options and Settings -> Options -> Python Scripting
  • Now, under the option Detected Python Home Directories give the folder location where your Python is installed

  • Install the pandas, NumPy, and Matplotlib libraries using the command prompt

Note – To integrate the model into Power BI, we first need a dataset with the same columns that were passed to our model. The records of the table can differ, but the headers need to match those of our model, because the model was trained on those headers and only knows those column names.
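This header check can also be done in code before loading the data (the column names below are hypothetical placeholders for the training schema; the same renaming can be done in the Power Query Editor):

```python
import pandas as pd

# Columns the model was trained on (assumed, for illustration only).
expected = ["AGE", "SEX", "BMI", "BP", "S1", "S2", "S3", "S4", "S5", "S6"]

# Incoming data whose headers don't match the training schema.
df = pd.DataFrame({"age": [48], "sex": [1], "bmi": [24.1], "bp": [82.0],
                   "s1": [180], "s2": [105], "s3": [48], "s4": [4.0],
                   "s5": [4.6], "s6": [87]})

# Rename to the exact headers the model expects before scoring.
df.columns = expected
print(list(df.columns))
```

If any expected header is missing or misspelled, the scoring call will fail, so it is worth validating the columns up front.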

  • In Power BI Desktop our sample data looks like below-

  • Now, go to the Home Tab -> Transform Data , Power Query Editor window will appear
  • In Power Query Editor go to Home Tab -> Azure Machine Learning Option

You will get the list of models that have been previously built. Select the model you deployed. You can see the created date and last modified date of the model. As soon as you click the OK button, your model will be loaded into Power BI along with the predictions. You can check the results by changing some records of the dataset.

 

  • Now, you can see the predicted value as a column in your dataset. Click on Close & Apply Option

  • In Power BI Desktop, under the visualization view, drag the Python script visual from the visuals pane
  • In Fields, select Y and the model output column that was loaded in the Power Query Editor
  • Rename Y as Actual Value and AzureML:DiabetesPrediction as Predicted Value for better readability of the visual

Below is the small piece of code you need to write in the Python script editor to plot the line chart above:

import matplotlib.pyplot as plt

# Power BI exposes the selected fields as a pandas DataFrame named 'dataset'.
dataset.plot(figsize=(12, 6))

plt.show()

The post Azure Machine Learning Integrated with Power BI first appeared on IFI Techsolutions.

Migrating Data from On-Premises MySQL to Azure Database for MySQL Using DMS
Fri, 12 Nov 2021

Author – Kriti Bhardwaj (Data Engineer)

In this documentation you will learn how to migrate on-premises MySQL to Azure Database for MySQL using DMS. Here I have taken a demo sample database to migrate with minimal downtime.

Pre-requisites:
  1. Create Azure database for MySQL.
  2. Create a virtual machine.
  3. Create DMS.
  4. Install MySQL workbench.
  5. Both Azure Database for MySQL and on-premises MySQL should be of the same version (i.e., 5.7 or above).
  6. Add the client IP address and the VM IP address to the Azure Database for MySQL firewall.
  7. Open outbound port 443 and inbound port 3306 on the virtual machine.
  8. Configure the DNS name of the VM.
Steps to follow:
  1. Download MySQL Workbench on the virtual machine.
  2. Create a database and a table on your localhost and fill some entries in the table.
  3. Create Azure Database for MySQL using the portal.
  4. Connect Azure Database for MySQL with Workbench.
  5. Go to the following directory in cmd:
C:\Program Files\MySQL\MySQL Server 8.0\bin
  6. Write the following commands at the MySQL prompt to grant all privileges to your user:
mysql> CREATE USER 'monty'@'localhost' IDENTIFIED BY 'some_pass';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'monty'@'localhost'
->     WITH GRANT OPTION;
mysql> CREATE USER 'monty'@'%' IDENTIFIED BY 'some_pass';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'monty'@'%'
->     WITH GRANT OPTION;
  7. Use the mysqldump command to do the schema migration. The options are:

-h: localhost (You can get this IP from VM)

-u: username

-p: Password

--databases: demo

mysqldump -h [localhost] -u [username] -p[password] --databases [db name] --no-data > [schema file path]
Example:
mysqldump -h localhost -u username -p --databases demo --no-data > C:\migration\sample.sql
  8. To import the schema to the Azure Database for MySQL target, run the following command:

-h: servername.mysql.database.azure.com

-u: mysqladminuser@servername

-p: Password of your MySQL Server

--databases: demo

mysql -h [servername] -u [username] -p[password] [database] < [schema file path]
Example:
mysql -h mydemoservernm.mysql.database.azure.com -u myadmin@mydemoservernm -p demo < C:\migration\sample.sql
  9. If you get the above collation error, then follow these steps:
  • Click on the red-marked option above
  • Select the .sql file created before and open it
  • Wherever 'utf8mb4_unicode_520_ci' is written, replace it with 'utf8mb4_unicode_ci' and save the file
  • Now run the above command again
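The collation replacement above can also be scripted instead of edited by hand. A minimal sketch (the path is the example one from the dump step; adjust it to your own schema file):

```python
from pathlib import Path

def fix_collation(sql_path: str) -> None:
    """Replace the unsupported collation in a mysqldump schema file
    with one that Azure Database for MySQL accepts."""
    p = Path(sql_path)
    text = p.read_text(encoding="utf-8")
    p.write_text(text.replace("utf8mb4_unicode_520_ci", "utf8mb4_unicode_ci"),
                 encoding="utf-8")

# Usage (example path from the dump step above):
# fix_collation(r"C:\migration\sample.sql")
```

This is handy when the dump contains many tables, since the collation clause appears once per table definition.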
  10. Now the schema is migrated to Azure Database for MySQL.
  11. Create a Database Migration Service instance and then create a migration project in the Azure Portal.
  12. Select the source as MySQL and the destination as Azure Database for MySQL.
  13. On the Add Source Details screen, specify the connection details for the source MySQL instance, i.e. the details of your on-premises MySQL server, then select Save:
  • Source server name: DNS name of your VM
  • Server Port: 3306
  • Username: username
  • Password: Password
  14. On the Target details screen, specify the connection details for the target Azure Database for MySQL server, which is the pre-provisioned instance of Azure Database for MySQL to which the demo schema was deployed by using mysqldump:
  • Target Server name: servername.mysql.database.azure.com
  • Username: mysqladminuser@servername
  • Password: Password of your MySQL server
  15. Select Save, and then on the Map to target databases screen, map the source and the target database for migration. If the target database has the same name as the source database, Azure Database Migration Service selects it by default.
  16. Select Save. On the Migration summary screen, in the Activity name text box, specify a name for the migration activity, then review the summary to ensure that the source and target details match what you specified previously. Select Run migration.
  17. The migration activity window appears, and the status of the activity is Pending. Refresh the page after some time.
  18. The migration process is complete.
  19. Now check Azure Database for MySQL in Workbench to see whether the data has migrated.
  20. The data is successfully migrated.

The post Migrating data from On-Premises MySQL to Azure database for MySQL using DMS first appeared on IFI Techsolutions.

Migration from Azure SQL Database to SQL Managed Instance
Tue, 11 May 2021

Author: Shruti Srivastava, Associate Data Engineer

In this blog, we will see how to migrate data from an Azure SQL Database to an Azure SQL Managed Instance.

Pre-requisites:
  1. Azure Account
  2. SSMS
Steps:

A. Create Azure SQL Database with a sample database
  1. Click on Create SQL database.
  2. Enter the subscription name, resource group, database name, and server name.
  3. Select the pricing tier: Standard with 10 DTUs and 100 MB data max size.
  4. In additional settings, choose the data source as Sample.
  5. Select Review + create.
  6. Select Create, and the Azure sample database will be created.
B. Create SQL Managed Instance
  1. Click on Create Azure SQL Managed Instance in the Azure Portal.
  2. Enter the subscription name, resource group, managed instance name, and region.
  3. Configure compute and storage.
  4. Select backup storage redundancy as locally-redundant backup storage.
  5. Enter the administrator account details, i.e. the managed instance admin login and password.
  6. Create a virtual network and enable the public endpoint.
  7. Click on Review + create.
  8. Click on Create, and the Azure SQL Managed Instance will be created.
C. Connect the Azure SQL server and the SQL Managed Instance in SSMS

Open SSMS and, in Object Explorer, connect to the database engine. Both the Azure SQL server and the managed instance will be connected in SSMS.

D. Create a database in the Azure SQL Managed Instance
  1. Right-click on Databases, select New Database, and create it.
E. Migration Process
  1. As you expand the databases of the SQL server, you will see the database shrutitestdatabase. Under the tables of this database you will see the tables of the sample data.
  2. Right-click on the database named shrutitestdatabase, click on Tasks, and select Export Data.
  3. A wizard will open. Click Next.
  4. Select the data source as SQL Server Native Client 11.0 and enter the Azure SQL server name. In Authentication, select SQL Server Authentication and enter the username and password. In Database, select the database name and click Next.
  5. Select the destination as SQL Server Native Client 11.0 and enter the Azure SQL Managed Instance server name. In Authentication, select SQL Server Authentication and enter the username and password. In Database, select the database name and click Next.
  6. Click on "Copy data from one or more tables or views" and click Next.
  7. Select all tables and views of the source server and click Next.
  8. Click on Run Immediately and click Next.
  9. Click on Finish.
  10. Click on Close.
  11. You will see the tables of the SQL server database shrutitestdatabase in the demodatabase of the managed instance.

The post Migration from Azure SQL Database to SQL Managed Instance. first appeared on IFI Techsolutions.

Automate Fetching of Email Attachments with Microsoft Flow
Fri, 16 Apr 2021

The post Automate Fetching of Email Attachments with Microsoft Flow first appeared on IFI Techsolutions.

Author: Ayush Chauhan, Data Engineer

In this article, we will see how to automatically save the attachments associated with an Outlook Mail to Azure File Share using Microsoft Flow. This helps to archive the Outlook attachments.

To automate the attachment extraction from email, and thereby streamline your entire solution, you need to:

  • Access the Microsoft Flow portal: https://flow.microsoft.com/ and sign in with your account
  • In flow.microsoft.com, select “My flows” in the top navigation bar, and then select “+ Create from blank” to create a Flow from scratch.

Automate Fetching of Email Attachments with Microsoft Flow

  • On the “Create a flow from blank” page, select “Create from blank” to start the process of creating an empty Flow.

Configure the Email Connector

  • On the Flow designer, on the Trigger event enter “Email” and select the option “Email – When a new email arrives”.


We want every email received from a particular email ID to trigger this flow, so on the Email Trigger configuration:

  • Confirm that “Inbox” is configured in the “Folder” property
  • Specify a filter in the “From” property if, for instance, you only want emails that come from a specific email ID.
  • And confirm that the “Include Attachments” property is set to “Yes”.


Be aware that “Has Attachments” is a Boolean attribute that lets you define whether you want to receive only emails with attachments. This is the first filtering rule, since we are looking for emails with surveys in attachments. The “Include Attachments” property is another Boolean; this one lets you specify whether you indeed want to retrieve the attachments along with the email.

Set global variables

  • To avoid having multiple actions with the same configuration in different condition branches, which may lead to mismatched configurations, we will use variables; nevertheless, this is optional.
  • Add the next step by clicking the “+ New step” button and then choose the “Add an action” option.
  • In the “Choose an action” window, enter “Apply to each” and select the action, then enter “Attachments” in “Select an output from previous steps”.
  • Within the “Apply to each” action, create another action: search for Azure File Storage and select “Create file”.


Setup the Content

  • Specify the folder in which you want to store the file.
  • On the “File Name” property, select from the list of tokens the “Attachment Name” token from the “When a new email arrives” trigger.
  • On the “File Content” property, select from the list of tokens the “Attachment Content” token from the “When a new email arrives” trigger.


Finally, save the flow by clicking ‘Save’ at the upper right-hand corner and test it to validate the flow. Once it is triggered, it will copy the attachment to an Azure File Storage folder.

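For comparison, the same archive logic can be scripted outside Flow. The sketch below is a rough outline using the Microsoft Graph REST API and the azure-storage-file-share SDK: the endpoint shapes and SDK calls are real, but the token, share name, and sender address are placeholder assumptions, and this is a sketch rather than a drop-in solution.

```python
import base64

GRAPH = "https://graph.microsoft.com/v1.0"

def build_filter(sender: str) -> str:
    """OData filter mirroring the Flow trigger: has attachments AND from sender."""
    return f"hasAttachments eq true and from/emailAddress/address eq '{sender}'"

def archive_attachments(token: str, sender: str, share_conn_str: str):
    # Third-party imports kept local so build_filter stays importable on its own.
    import requests
    from azure.storage.fileshare import ShareFileClient

    headers = {"Authorization": f"Bearer {token}"}
    msgs = requests.get(
        f"{GRAPH}/me/mailFolders/Inbox/messages",
        headers=headers,
        params={"$filter": build_filter(sender)},
    ).json().get("value", [])

    for msg in msgs:
        atts = requests.get(
            f"{GRAPH}/me/messages/{msg['id']}/attachments", headers=headers
        ).json().get("value", [])
        for att in atts:
            if "contentBytes" not in att:  # skip item/reference attachments
                continue
            # "mailarchive" is a placeholder Azure File Share name.
            file_client = ShareFileClient.from_connection_string(
                share_conn_str, share_name="mailarchive", file_path=att["name"]
            )
            file_client.upload_file(base64.b64decode(att["contentBytes"]))

# Example (requires a valid Graph token and a storage connection string):
# archive_attachments("<token>", "surveys@example.com", "<connection-string>")
```

The `build_filter` helper produces the same predicate the Flow trigger applies through its “Has Attachments” and “From” properties.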

Wrap Up

This blog helps to automate the extraction of email attachments in a simple and time-efficient way with Microsoft Flow, without having to write any heavy code. Hope this helps!

The post Automate Fetching of Email Attachments with Microsoft Flow first appeared on IFI Techsolutions.

]]>
https://devifitechwebsite.azurewebsites.net/blogs/email-attachments-microsoft-flow/feed/ 0
Visualize Tally Data in PowerBI https://devifitechwebsite.azurewebsites.net/blogs/tally-data-in-powerbi/ https://devifitechwebsite.azurewebsites.net/blogs/tally-data-in-powerbi/#respond Wed, 10 Feb 2021 07:23:30 +0000 https://ifi-prod-poc-2.azurewebsites.net/?p=10625 Author: Sneha Valia Tally.ERP 9 Tally is at the forefront of simplifying business management for business through automation.Tally.ERP 9 is awindows–basedEnterprise Resource Planningsoftware. The software handlesAccounting,Inventory Management, Order Management, Tax Management,Payroll, Banking and many such requirements of the business. It supports all day-to-day processes from recording invoices to generating various MIS reports. Tally’s product caters […]

The post Visualize Tally Data in PowerBI first appeared on IFI Techsolutions.

]]>
Author: Sneha Valia

Tally.ERP 9

Tally is at the forefront of simplifying business management through automation. Tally.ERP 9 is a Windows-based Enterprise Resource Planning software. The software handles Accounting, Inventory Management, Order Management, Tax Management, Payroll, Banking, and many other requirements of the business. It supports all day-to-day processes from recording invoices to generating various MIS reports.

Tally’s products have catered to millions of users across industries in over 100 countries in the three decades since the company was founded.

Microsoft Power BI

Microsoft Power BI is a suite of business intelligence (BI), reporting, and data visualization products and services for individuals and teams. Power BI stands out with streamlined publication and distribution capabilities, as well as integration with other Microsoft products and services.

Integration using ODBC Interface

Open Database Connectivity (ODBC) is an interface for accessing data in a heterogeneous environment of relational and non-relational database management systems. It is an Application Program Interface (API) specification that allows applications to access multiple database systems using Structured Query Language (SQL). ODBC provides maximum interoperability: a single application can access many different database systems.

Tally.ERP 9 is an ODBC-enabled application. The Tally ODBC Driver is a powerful tool that allows you to connect to live data in Tally directly from any application that supports ODBC connectivity. You can access Tally data as you would a database: read, write, and update Tally Sales Orders, Purchase Orders, Accounts, etc. through a standard ODBC Driver interface. With Open Database Connectivity you can exchange data between Tally.ERP 9 and any other application dynamically. Using the ODBC Interface, Tally.ERP 9 can also make ODBC calls to an external database and retrieve data from it; in such a case Tally acts as a client that pulls data from disparate data sources, and this data can be consumed in Tally as per requirement.
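The same DSN that Power BI uses below can also be queried from Python via pyodbc. This is an illustrative aside, not part of the original walkthrough: the DSN name matches the TallyODBC_9000 source used later, but the `Ledger` collection and the `$Name`/`$Parent` columns are assumptions based on Tally’s $-prefixed ODBC fields and may differ in your installation.

```python
def dsn_connection_string(dsn: str) -> str:
    """Build an ODBC connection string for a pre-configured DSN."""
    return f"DSN={dsn}"

def fetch_ledgers(dsn: str = "TallyODBC_9000"):
    # pyodbc imported locally so dsn_connection_string works without the driver.
    import pyodbc
    with pyodbc.connect(dsn_connection_string(dsn)) as conn:
        # Tally exposes $-prefixed fields over ODBC; names here are assumptions.
        return conn.cursor().execute("SELECT $Name, $Parent FROM Ledger").fetchall()

# Example (requires Tally running with ODBC enabled on port 9000):
# for name, parent in fetch_ledgers():
#     print(name, "->", parent)
```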

Let’s go ahead and see the demonstration ofconnecting Tally Data to Power BI using Tally ODBC Driver.

Step 1: Set the following connection properties to connect to the Tally instance. Press Ctrl + Alt + F for Configurations.

Visualize Tally Data in PowerBI using Tally ODBC Driver and ADF

Step 2: In Client/Server Configuration, go to → Tally.ERP 9 is acting as → Both.

 


Step 3: Press Enter for the rest of the settings and, lastly, in the Accept window select Yes to save the settings.

 


 

Step 4: Open Power BI → Select Get data option → More…

 


 

Step 5: Go to Others → Select ODBC → Click on Connect.

 


 

Step 6: In Data Source Name, select TallyODBC_9000 → Press OK.

 


 

Step 7: In the Navigator screen → ODBC dsn → Click ODBC (dsn = Tally_9000) → Select the dataset → Build visuals.

 


 

For direct extraction of data from Tally, the ODBC method is the most optimal. The challenge, however, is a limitation of Power BI: it can only load and transform around 60 thousand rows, which is not appropriate for large datasets. Hence, Azure Data Factory is used to overcome this limit on the number of records that can be processed.

Let’s see how …

 

Azure Data Factory

Azure Data Factory is a cloud-based ETL and data integration service that lets you create data-driven workflows for orchestrating data movement and transforming data at scale. Using Azure Data Factory, you can create and schedule data-driven workflows (called pipelines) that can ingest data from disparate data stores.

The Tally Pipeline:

The Tally Pipeline consists of a ‘ForEach’ activity, which includes one Copy activity. The parameters defined for the pipeline are the source table name, the destination table name, and the script. The Copy activity picks up these values from the parameters and then completes the data copy from the Tally ERP 9 software to the SQL database. A trigger makes sure that the pipeline runs on a pre-defined schedule.
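Conceptually, the ForEach/Copy pattern iterates over (source table, destination table) parameter pairs and copies rows in batches. The sketch below is not ADF code; it just illustrates that logic in plain Python, using in-memory sqlite3 databases as stand-ins for the Tally ODBC source and the Azure SQL sink.

```python
import sqlite3

def batched(rows, size):
    """Yield rows in chunks, the way a copy activity writes in batches."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def copy_table(src, dst, src_table, dst_table, batch_size=1000):
    """Copy all rows from src_table on src to dst_table on dst; return row count."""
    rows = src.execute(f"SELECT * FROM {src_table}").fetchall()
    if not rows:
        return 0
    placeholders = ",".join("?" * len(rows[0]))
    for chunk in batched(rows, batch_size):
        dst.executemany(f"INSERT INTO {dst_table} VALUES ({placeholders})", chunk)
    dst.commit()
    return len(rows)

# Stand-ins for the ODBC source and the SQL sink:
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE SalesOrders (id INTEGER, amount REAL)")
src.executemany("INSERT INTO SalesOrders VALUES (?, ?)", [(1, 10.0), (2, 20.5)])
dst.execute("CREATE TABLE SalesOrders (id INTEGER, amount REAL)")

# The 'ForEach' over the pipeline parameters:
for src_table, dst_table in [("SalesOrders", "SalesOrders")]:
    copied = copy_table(src, dst, src_table, dst_table)
    print(f"{src_table}: copied {copied} rows")
```

In the real pipeline the ForEach iterates over the parameter array, and each Copy activity reads via the self-hosted integration runtime’s ODBC linked service instead of sqlite3.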

Before you start with this exercise, you will need to have the following prerequisites:

  1. You need to have an active Azure Subscription.
  2. You need to have a data factory created in Azure, an integration runtime, and two linked services set up beforehand.
  3. In Overview, click ‘Author & Monitor’ to begin creating datasets and pipelines.

Steps to create the Resources –

Step 1: Go to Author → Datasets → Click on the three dots at the right of the Pipeline actions → New Dataset.

Step 2: Select ODBC → Click on Continue → Select the linked service, integration runtime and table → Finally test the connection.

 


 

Step 3: Repeat Step 1 and select ‘Azure SQL Database’ to create the sink dataset (the destination in SQL Server).

 


 


Step 4: Go to Pipelines → Click on the three dots at the right → New Pipeline.

  • Under Activities → Go to Move & Transform → Drag Copy Data → Select Copy Data.
  • Below, go to Source and specify the source dataset: the dataset that we created in Steps 1 & 2.

 


 


Step 5: Go to the Sink option and specify the sink dataset.

 


 


Step 6: Once you have specified the source and sink → Click Validate and validate the Copy runtime options.

  • After validation, click Debug.
  • Debug feature runs the Pipeline and in output tab we can see the status of the pipeline.
  • If the Status shows ‘Succeeded’, your pipeline has successfully implemented the copy activity.

 

Conclusion – Accessing the Tally ODBC Driver and storing data in SQL Server with the help of Azure Data Factory (for large Tally datasets) overcomes the limitations of Power BI. You also get live connectivity to your Tally data by connecting the SQL Database in Power BI, allowing you to build real-time charts, graphs & more.

The post Visualize Tally Data in PowerBI first appeared on IFI Techsolutions.

]]>
https://devifitechwebsite.azurewebsites.net/blogs/tally-data-in-powerbi/feed/ 0
Azure Databricks https://devifitechwebsite.azurewebsites.net/blogs/azure-databricks/ https://devifitechwebsite.azurewebsites.net/blogs/azure-databricks/#respond Mon, 09 Mar 2020 16:11:53 +0000 https://ifi-prod-poc-2.azurewebsites.net/?p=5252 What is Azure DataBricks ? Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform . It is a fast, easy, and collaborative Apache Spark-based analytics service Here are some of the benefits for Data Engineers and Data Scientists for using Azure Databricks: 1. Optimized Environment Azure Databricks is […]

The post Azure Databricks first appeared on IFI Techsolutions.

]]>

What is Azure Databricks?

Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform. It is a fast, easy, and collaborative Apache Spark-based analytics service.


Here are some of the benefits for Data Engineers and Data Scientists for using Azure Databricks:

1. Optimized Environment
Azure Databricks is optimized from the ground up for performance and cost-efficiency in the cloud.

The Databricks Runtime adds several key capabilities to Apache Spark workloads that can increase performance and reduce costs by as much as 10-100x when running on Azure, including:

a) High-speed connectors to Azure storage services, such as Azure Blob Store and Azure Data Lake, developed together with the Microsoft teams behind these services.

b) Auto-scaling and auto-termination for Spark clusters to automatically minimize costs.

c) Performance optimizations including caching, indexing, and advanced query optimization, which can improve performance by as much as 10-100x over traditional Apache Spark deployments in cloud or on-premises environments.

2. Seamless Collaboration
Notebooks on Databricks are live and shared, with real-time collaboration, so that everyone in your organization can work with your data. Dashboards enable business users to call an existing job with new parameters. Also, Databricks integrates closely with Power BI for interactive visualization.

3. Easy to Use
Azure Databricks comes packaged with interactive notebooks that let you connect to common data sources, run machine learning algorithms, and learn the basics of Apache Spark to get started quickly. It also features an integrated debugging environment to let you analyze the progress of your Spark jobs from within interactive notebooks, and powerful tools to analyze past jobs. Finally, other common analytics libraries, such as the Python and R data science stacks, are preinstalled so that you can use them with Spark to derive insights.
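To give a flavor of what such a notebook cell looks like, here is a minimal PySpark sketch. It assumes a running cluster with access to Blob Storage; the container, storage account, file, and column names are placeholders, and `wasbs_path` merely assembles the standard Azure Blob Storage URI used by the connector.

```python
def wasbs_path(container: str, account: str, blob: str) -> str:
    """Assemble the wasbs:// URI used by the Azure Blob Storage connector."""
    return f"wasbs://{container}@{account}.blob.core.windows.net/{blob}"

def run(spark):
    # Read a CSV from Blob Storage and aggregate it, a typical notebook cell.
    df = spark.read.option("header", "true").csv(
        wasbs_path("data", "mystorageacct", "sales.csv")
    )
    return df.groupBy("region").count()

# On Databricks, `spark` is predefined in every notebook; elsewhere:
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.getOrCreate()
# run(spark).show()
```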

Total Azure Integration

  • Diversity of VM types: Customers can use all existing VMs including F-series for machine learning scenarios, M-series for massive memory scenarios, D-series for general purpose, etc.
  • Security and Privacy: In Azure, ownership and control of data is with the customer. We have built Azure Databricks to adhere to these standards. We aim for Azure Databricks to provide all the compliance certifications that the rest of Azure adheres to.
  • Flexibility in network topology: Customers have a diversity of network infrastructure needs. Azure Databricks supports deployments in customer VNETs, which can control which sources and sinks can be accessed and how they are accessed.
  • Azure Storage and Azure Data Lake integration: These storage services are exposed to Databricks users via DBFS to provide caching and optimized analysis over existing data.
  • Azure Power BI: Users can connect Power BI directly to their Databricks clusters using JDBC in order to query data interactively at massive scale using familiar tools.
  • Azure Active Directory provides control of access to resources and is already in use in most enterprises. Azure Databricks workspaces deploy in customer subscriptions, so naturally AAD can be used to control access to sources, results, and jobs.
  • Azure SQL Data Warehouse, Azure SQL DB, and Azure CosmosDB: Azure Databricks easily and efficiently uploads results into these services for further analysis and real-time serving, making it simple to build end-to-end data architectures on Azure.

For the first time, a leading cloud provider and a leading analytics system provider have partnered to build a cloud analytics platform optimized from the ground up – from Azure’s storage and network infrastructure all the way to Databricks’s runtime for Apache Spark. We believe that Azure Databricks will greatly simplify building enterprise-grade production data applications.

The post Azure Databricks first appeared on IFI Techsolutions.

]]>
https://devifitechwebsite.azurewebsites.net/blogs/azure-databricks/feed/ 0