Data Science is a combination of algorithms, tools, and machine learning techniques that help you uncover hidden patterns in raw data.

In this article, you can go through the Data Science interview questions most frequently asked by interview panels. Curated by top industry experts at HKR Training, these will help you crack your interview.

Let us have a quick review of the Data Science interview questions.

**What is Machine Learning?**

Ans: Machine Learning is the process of exploring and constructing algorithms that can learn from data and make predictions on it. It is closely related to computational statistics. It implements complex models and algorithms for prediction, a practice known as predictive analytics.

**What is supervised learning?**

Ans:

- Supervised learning is the machine learning task of inferring a function from labelled training data.
- The training data consist of a set of training examples.
- Algorithms in supervised learning include:
  - Support Vector Machines.
  - Regression.
  - Naive Bayes.
  - Decision Trees.
  - K-nearest Neighbour Algorithm.
  - Neural Networks.

Example: If you built a flower classifier, the labels would be "this is a lotus", "this is a rose", and "this is a sunflower", based on showing the classifier examples of lotus, rose, and sunflower.
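The flower example above can be sketched with scikit-learn; the petal measurements below are invented for illustration, not real botanical data.

```python
# A minimal supervised-learning sketch using scikit-learn's KNN classifier.
# The petal measurements and flower labels below are made up for illustration.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[6.0, 3.0], [5.8, 2.9],   # lotus-like measurements
           [1.5, 0.3], [1.4, 0.2],   # rose-like measurements
           [4.7, 1.4], [4.5, 1.5]]   # sunflower-like measurements
y_train = ["lotus", "lotus", "rose", "rose", "sunflower", "sunflower"]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)  # infer a function from the labelled training data

prediction = clf.predict([[5.9, 3.1]])[0]
print(prediction)  # -> lotus
```

The labels here play the role described in the answer: the model only learns because every training example carries one.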

**What is unsupervised learning?**

Ans:

- Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labelled responses.
- Algorithms in unsupervised learning include:
  - Clustering.
  - Anomaly Detection.
  - Neural Networks.
  - Latent Variable Models.

Example: In the same flower scenario, a clustering algorithm might group the flowers as "flowers with various colours", "flowers which are fresh", and "flowers which are dry", without being told those categories in advance.
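As a sketch of the unsupervised case, K-means can group unlabelled points; the two-dimensional "measurements" below are synthetic.

```python
# A minimal unsupervised-learning sketch: K-means groups unlabelled points.
# The two-dimensional measurements below are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],   # one natural group
              [8.0, 8.1], [8.2, 7.9], [7.9, 8.0]])  # another natural group

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # two cluster ids, with no labels supplied by us
```

Unlike the supervised example, no `y` is ever passed in: the algorithm infers the grouping from the inputs alone.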

**What is Data Science? How does supervised learning differ from unsupervised learning?**

Ans: Data Science is a combination of various tools, algorithms, and machine learning principles, with the goal of discovering hidden patterns in raw data.

**Supervised Learning:**

- Input data is labelled.
- Uses a training dataset.
- It is used for prediction.
- It enables classification and regression.

**Unsupervised Learning:**

- Input data is unlabelled.
- Uses the input dataset.
- It is used for analysis.
- It enables Clustering, Density Estimation, & Dimension Reduction.

**What are univariate, bivariate, and multivariate analysis?**

Ans:

**Univariate analysis:**

Descriptive statistical analysis techniques can be differentiated based on the number of variables involved at a given point of time; univariate analysis involves only one variable.

Example: The pie charts of sales based on territory involve only one variable and can be referred to as univariate analysis.

**Bivariate analysis:**

If the analysis attempts to understand the relationship between two variables at a time, as in a scatterplot, it is referred to as bivariate analysis.

Example: Analysing sales volume against spending can be considered an example of bivariate analysis.

**Multivariate analysis:**

Analysis that deals with the study of more than two variables to understand the effect of variables on the responses is referred to as multivariate analysis.

**What is a normal distribution?**

Ans:

- Data can be distributed in different ways with a bias to the left or to the right or it can all be jumbled up.
- There are also chances that data is distributed around a central value without any bias to the left or right and reaches normal distribution in the form of a bell-shaped curve.
- The properties of the normal distribution are as follows:
  - Unimodal: the data has only one mode.
  - Symmetrical: the left and right halves are mirror images of each other.
  - Bell-shaped: the curve reaches its maximum height (the mode) at the mean.
  - Mean, mode, and median are all located at the centre.
  - Asymptotic: the tails approach, but never touch, the horizontal axis.
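These properties can be checked numerically; a quick NumPy sketch, assuming an arbitrary mean of 50 and standard deviation of 5:

```python
# Sketch: for normally distributed data, the mean and median coincide at the centre.
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=50.0, scale=5.0, size=100_000)

print(round(float(sample.mean()), 1))      # close to 50
print(round(float(np.median(sample)), 1))  # also close to 50, by symmetry
```

With a large sample, both summaries land on the distribution's centre, illustrating the "mean, mode, and median in the centre" property.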

**What are overfitting and underfitting?**

Ans: One of the most common tasks in machine learning is to fit a model to a set of training data such that reliable predictions can be made on new, unseen data.

**Over-fitting:**

- In this, a statistical model describes random error or noise instead of the underlying relationship.
- It occurs when a model is excessively complex, such as having too many parameters relative to the number of observations.
- A model that has been overfitted has low predictive performance, as it overreacts to minor fluctuations in the training data.

**Under-fitting:**

- Underfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data.

Example: fitting a linear model to non-linear data. Such a model would also have low predictive performance.
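Both failure modes can be sketched with NumPy polynomial fits; the sine data and the chosen degrees are arbitrary illustrations. A degree-1 line underfits a sine curve, while a high-degree polynomial chases the noise.

```python
# Sketch: fitting polynomials of increasing degree to noisy sine data.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, size=x.size)

errors = {}
for degree in (1, 3, 9):  # underfit, reasonable fit, likely overfit
    coeffs = np.polyfit(x, y, degree)
    errors[degree] = float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    print(degree, round(errors[degree], 4))
# Training error always shrinks as degree grows, but the degree-9 fit
# would generalize poorly to new data: that gap is the overfitting.
```

The degree-1 model cannot capture the underlying trend (underfitting); the degree-9 model has too many parameters relative to the 30 observations (overfitting).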

**How can outlier values be treated?**

Ans:

Outlier values can be identified using univariate or other graphical analysis methods. If there are only a few outlier values, they can be assessed individually; for a large number of outliers, the values can be substituted with either the 99th or the 1st percentile values.

All extreme values are not outlier values. The most common ways to treat outlier values are:

- Changing the value to bring it within a range.
- Simply removing the value.
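Both treatments can be sketched with NumPy percentile clipping; the data below is synthetic, with two outliers planted deliberately.

```python
# Sketch: treating outliers by capping at the 1st/99th percentiles, or removing them.
import numpy as np

rng = np.random.default_rng(2)
values = np.append(rng.normal(100.0, 10.0, 500), [900.0, -500.0])  # two planted outliers

low, high = np.percentile(values, [1, 99])
capped = np.clip(values, low, high)                # option 1: bring values within a range
kept = values[(values >= low) & (values <= high)]  # option 2: simply remove them

print(round(float(capped.max()), 1), kept.size)
```

Capping (winsorizing) preserves the row count, while removal shrinks the dataset; which to use depends on how much the downstream model cares about sample size.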

**What are the steps involved in a data analytics project?**

Ans: A data analytics project involves the following steps:

- Understanding the business problem.
- Exploring the data and studying it carefully.
- Preparing the data for modelling by finding missing values and transforming variables.
- Running the model and analysing the result.
- Validating the model with a new data set.
- Implementing the model and tracking the result to analyse its performance over a specific period.


**What are Machine Learning and Deep Learning?**

Ans:

**Machine Learning:**

It is a field of computer science that gives computers the ability to learn without being explicitly programmed. There are three categories of machine learning:

- Supervised machine learning.
- Unsupervised machine learning.
- Reinforcement learning.

**Deep Learning:**

It is a subfield of machine learning which is concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.

**What is dimensionality reduction, and what are its benefits?**

Ans:

- Dimensionality reduction is the process of conveying similar information by converting a dataset with huge dimensions into data with fewer dimensions.
- The advantage of dimensionality reduction is that it helps compress the data, reducing storage space, computational time, and redundant features.
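Principal Component Analysis (PCA) is one common way to do this; a scikit-learn sketch on synthetic data that genuinely has only two underlying dimensions:

```python
# Sketch: dimensionality reduction with PCA on rank-2 synthetic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
factors = rng.normal(size=(100, 2))      # two hidden factors
X = factors @ rng.normal(size=(2, 5))    # observed as 5 redundant columns

pca = PCA(n_components=2)
reduced = pca.fit_transform(X)           # 5 dimensions compressed to 2

print(reduced.shape)
print(round(float(pca.explained_variance_ratio_.sum()), 2))  # ~1.0: little information lost
```

Because the five observed columns are linear mixes of two factors, almost all the variance survives the compression, which is exactly the "similar information, fewer dimensions" idea.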

**What is a p-value?**

Ans: While performing a hypothesis test in statistics, a p-value helps in determining the strength of your results. A p-value is a number between 0 and 1; its value denotes the strength of the results. The claim on trial is called the null hypothesis.

**p-value ≤ 0.05:**

This indicates strong evidence against the null hypothesis, so you can reject the null hypothesis.

**p-value > 0.05:**

This indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.

**p-value at the 0.05 cutoff:**

This is considered marginal, meaning it could go either way.
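This decision rule can be sketched with SciPy's one-sample t-test; the sample and the hypothesised mean of 50 are invented for illustration.

```python
# Sketch: computing a p-value for the null hypothesis "the population mean is 50".
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.normal(loc=55.0, scale=5.0, size=40)  # true mean deliberately far from 50

t_stat, p_value = stats.ttest_1samp(sample, popmean=50.0)
if p_value <= 0.05:
    decision = "reject the null hypothesis"
else:
    decision = "fail to reject the null hypothesis"
print(round(float(p_value), 6), decision)
```

Because the sample was drawn with a mean well away from 50, the p-value comes out far below 0.05 and the null hypothesis is rejected.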

**How do you build a decision tree?**

Ans: The following steps need to be implemented to build a decision tree:

- Consider the entire data set as input.
- Look for a split that maximizes the separation of the classes. A split is a test that divides the data into two sets.
- Apply the split to the input data to divide it.
- Re-apply steps one and two to the divided data.
- Stop when you meet a stopping criterion.
- Clean up the tree if you have made too many splits; this step is called pruning.
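scikit-learn's `DecisionTreeClassifier` runs these steps for you; a sketch on made-up exam data, where `max_depth` serves as the stopping criterion:

```python
# Sketch: growing a small decision tree on hypothetical exam data.
from sklearn.tree import DecisionTreeClassifier

X = [[1, 4], [2, 5], [8, 7], [9, 6], [7, 8], [2, 3]]  # [hours_studied, hours_slept]
y = ["fail", "fail", "pass", "pass", "pass", "fail"]

# max_depth limits how far the splits go (a built-in stopping criterion);
# the ccp_alpha parameter would additionally enable cost-complexity pruning.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print(tree.predict([[8, 6]])[0], tree.predict([[1, 5]])[0])  # -> pass fail
```

The tree's first split separates the classes on the hours-studied feature, which is exactly the "split that maximizes the separation of the classes" step above.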

**What is the difference between correlation and covariance?**

Ans: Correlation and covariance are two mathematical concepts used in statistics. Both establish a relationship and measure the dependency between two random variables.

**Correlation:**

- It is a technique used for measuring and estimating the quantitative relationship between two variables.
- Correlation measures how strongly two variables are related.

**Covariance:**

- Covariance measures the extent to which two random variables change together.
- It explains the systematic relationship between a pair of random variables, wherein a change in one variable is accompanied by a corresponding change in the other.
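The difference shows up directly in NumPy: covariance carries the units of the variables, while correlation is normalised to the range [-1, 1].

```python
# Sketch: covariance vs. correlation for two variables that move in lock-step.
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

covariance = np.cov(x, y)[0, 1]        # depends on the units of x and y
correlation = np.corrcoef(x, y)[0, 1]  # unitless strength of the relationship

print(round(float(covariance), 3), round(float(correlation), 3))
# Rescaling x (say, into different units) would change the covariance
# but leave the correlation at exactly 1.0.
```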

**What is the difference between a validation set and a test set?**

Ans:

**Validation Set:**

It is a set which can be considered as a part of the training set as it is used for parameter selection and for avoiding the overfitting of the model being built.

**Test Set:**

It is the set that is used for testing or evaluating the performance of a trained machine learning model. A training set will fit the parameters such as weights and a test set is used to assess the performance of the model, which means evaluating the predictive power and generalization.

**What is cross-validation?**

Ans:

- It is a model validation technique for evaluating how the outcomes of a statistical analysis will generalize to an independent dataset.
- It is mainly applied in settings where the objective is forecasting and one needs to estimate how accurately a model will perform in practice.
- The goal of cross-validation is to hold out part of the data to test the model during the training phase (the validation data set), so as to limit overfitting and gain insight into how the model will generalize to an independent data set.
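A sketch with scikit-learn's `cross_val_score` on the bundled Iris dataset; with `cv=5`, each fold serves once as the held-out validation set.

```python
# Sketch: 5-fold cross-validation of a classifier.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(len(scores), round(float(scores.mean()), 3))  # one accuracy score per fold
```

The spread of the five fold scores, not just their mean, is what gives the "insight into how the model will generalize".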

**What is the difference between linear regression and logistic regression?**

Ans:

**Linear Regression:**

- It is a statistical technique in which the score of some variable “Y” is predicted on the basis of the score of a second variable “X”.
- The “X” variable is referred to as the predictor variable while the “Y” variable is known as the criterion variable.

**Logistic Regression:**

- It is a statistical technique applied for predicting the binary outcome from a linear combination of predictor variables.
- Logistic Regression is also known as the logit model.
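A side-by-side sketch with scikit-learn, using invented toy numbers: linear regression predicts a continuous "Y", while logistic regression predicts a binary outcome.

```python
# Sketch: continuous prediction (linear) vs. binary prediction (logistic).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])  # the predictor "X"

y_continuous = np.array([1.9, 4.1, 6.0, 8.2, 9.9, 12.1])  # roughly y = 2x
linear = LinearRegression().fit(X, y_continuous)

y_binary = np.array([0, 0, 0, 1, 1, 1])  # e.g. fail/pass
logit = LogisticRegression().fit(X, y_binary)

print(round(float(linear.predict([[7.0]])[0]), 1))  # a continuous score
print(int(logit.predict([[7.0]])[0]))               # a 0/1 outcome
```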

**Which deep learning frameworks are commonly used?**

Ans: Commonly used deep learning frameworks include the following:

- Caffe.
- Keras.
- TensorFlow.
- PyTorch.
- Chainer.
- Microsoft Cognitive Toolkit.

**How does Deep Learning differ from Machine Learning?**

Ans:

**Machine Learning:**

- It gives computers the ability to learn without being explicitly programmed. It includes supervised, unsupervised, and reinforcement learning processes.
- It includes Deep Learning as one of its components.

**Deep Learning:**

- It is a subcomponent of machine learning concerned with algorithms inspired by the structure and function of the human brain, called Artificial Neural Networks.
- It learns multiple levels of representation from data using many-layered neural networks.

**What are the types of ensemble learning?**

Ans: There are two kinds of ensemble learning:

- **Bagging**: a technique that trains simple learners on random subsamples of the population and averages their predictions for estimation purposes.
- **Boosting**: a technique that adjusts the weight of each observation, thereby classifying the population into different sets before the outcome prediction is made.

**Which Python libraries are commonly used in data science?**

Ans: Commonly used machine learning libraries and their purposes include the following:

- NumPy: used for scientific computation.
- Statsmodels: used for time-series analysis.
- Pandas: used for tabular data analysis.
- Scikit-learn: used for data modelling and pre-processing.
- TensorFlow: used for deep learning.
- Regular Expressions: used for text processing.
- PyTorch: used for deep learning.
- NLTK: used for text processing.

**How do you build a random forest model?**

Ans: A random forest is built upon a number of decision trees. The data is split into different packages, a decision tree is constructed on each group of data, and the random forest brings all those trees together.

Steps to build a random forest model:

- Randomly select “k” features from a total of “m” features where k << m.
- Among the “k” features, calculate the node “D” using the best split point.
- Split the node into daughter nodes using the best split.
- Repeat steps two and three until leaf nodes are finalized.
- Build a forest by repeating steps one to four for “n” times to create an “n” number of trees.
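A sketch of these steps with scikit-learn on the Iris dataset: `n_estimators` is the "n trees" and `max_features` controls the "k of m" feature sampling described above.

```python
# Sketch: building a random forest from many decision trees.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

forest = RandomForestClassifier(n_estimators=100,     # "n" trees in the forest
                                max_features="sqrt",  # "k" features tried per split
                                random_state=0).fit(X, y)

print(len(forest.estimators_), round(forest.score(X, y), 2))
```

Each tree in `forest.estimators_` is grown on a bootstrap sample with its own random feature subsets, and the forest's prediction aggregates all of them.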

**Why is data cleaning important in analysis?**

Ans: Data cleaning helps in the analysis because:

- Cleaning data from multiple sources helps transform it into a format that data analysts and data scientists can work with.
- Data cleaning helps increase the accuracy of the model in machine learning.
- It is a cumbersome process: as the number of data sources increases, the time taken to clean the data increases exponentially due to the number of sources and the volume of data they generate.
- Cleaning the data alone can consume roughly 80% of the time, making it a crucial part of the analysis task.

**Which machine learning algorithms can handle missing values?**

Ans: The K-nearest neighbour (KNN) algorithm can handle missing values: when a feature value is missing, it simply computes the nearest neighbours based on all the other features.

While dealing with K-means clustering or linear regression, you have to handle missing values during pre-processing; otherwise these algorithms will fail. Decision trees have the same problem, although with some variance.

**How are weights initialized in a neural network?**

Ans: There are two methods for initializing the weights: you can either initialize the weights to zero or assign them randomly.

**Initializing all weights to 0:**

This makes your model similar to a linear model. All the neurons in every layer perform the same operation, giving the same output and making the deep net useless.

**Initializing all weights randomly:**

Here, the weights are assigned randomly, initialized very close to 0. This gives the model better accuracy, since every neuron performs a different computation. It is the most commonly used method.

**What are epoch, batch, and iteration in deep learning?**

Ans:

- Epoch: It represents one iteration over the entire dataset, i.e. everything is passed through the training model once.
- Batch: Since we cannot pass the entire dataset into the neural network at once, the dataset is divided into several batches.
- Iteration: One pass over a single batch. If we have 10,000 images as data and a batch size of 200, then an epoch runs 50 iterations, i.e. 10,000 divided by 200.
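The arithmetic from the example above, as a one-line sketch:

```python
# Sketch: relating dataset size, batch size, and iterations per epoch.
dataset_size = 10_000  # images in the dataset
batch_size = 200       # images processed per iteration

iterations_per_epoch = dataset_size // batch_size
print(iterations_per_epoch)  # -> 50 iterations make up one epoch
```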

**What are the layers of a CNN?**

Ans: CNN stands for Convolutional Neural Network, which contains four layers:

- Convolutional Layer: This layer performs a convolutional operation and creates several smaller picture windows to go over the data.
- ReLU Layer: It brings non-linearity to the network and converts all the negative pixels to zero. The output is a rectified feature map.
- Pooling Layer: The pooling is a down-sampling operation that reduces the dimensionality of the feature map.
- Fully Connected Layer: This layer recognizes and classifies the objects in the image.

**How are missing data values handled?**

Ans: Missing data values are handled in the following ways:

- If the data set is large, you can simply remove the rows with missing values. This is a very quick approach, and the rest of the data can still be used for prediction.
- For smaller data sets, you can substitute missing values with the mean of the rest of the data using a pandas DataFrame in Python, e.g. "df.fillna(df.mean())".
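Both approaches in a short pandas sketch; the toy "age" column is invented for illustration.

```python
# Sketch: dropping vs. imputing missing values with pandas.
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25.0, np.nan, 31.0, 28.0]})

dropped = df.dropna()                 # option 1: remove rows with missing values
filled = df.fillna(df["age"].mean())  # option 2: substitute the column mean (28.0)

print(len(dropped), filled["age"].tolist())
```

Dropping shrinks the dataset by one row, while imputation keeps all four rows with the gap filled by the mean of the remaining values.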

**What are point estimation and confidence intervals?**

Ans:

**Point Estimation:**

- It gives a particular value as an estimate of a population parameter.
- Method of Moments and Maximum Likelihood estimator methods are used to derive Point Estimators for population parameters.

**Confidence Interval:**

- It gives a range of values which is likely to contain the population parameter.
- The confidence interval is generally preferred, as it tells you how likely the interval is to contain the population parameter.
- This likeliness or probability is called a Confidence Level or Confidence coefficient and is represented by 1-alpha, where alpha is the level of significance.

**Which variable selection methods can be used?**

Ans: The following variable selection methods can be used:

- Remove the correlated variables before selecting important variables.
- Apply linear regression and select variables based on their p-values.
- Apply Backward, Forward Selection, and Stepwise Selection.
- Apply XGBoost or Random Forest and plot the variable importance chart.
- Measure information gain for the given set of features and select top “n” features accordingly.
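The tree-based option can be sketched with scikit-learn's feature importances on the Iris dataset, keeping the top n = 2 features:

```python
# Sketch: ranking features by random-forest importance and keeping the top n.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(data.data, data.target)

ranked = sorted(zip(data.feature_names, forest.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
top_two = [name for name, _ in ranked[:2]]
print(top_two)  # for Iris, the petal measurements dominate
```

In practice you would plot `ranked` as the variable importance chart mentioned above and cut it off at whatever "n" your model budget allows.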
