Data Modeling Interview Questions
Last updated on Jan 09, 2024
Data modeling is the process of creating a model for the data to be stored in a database. It is a conceptual representation of data objects, the associations between different data objects, and the rules that govern them.
In this article, you can go through the set of Data Modeling interview questions most frequently asked by interview panels. These questions are curated by top industry experts at HKR Trainings and will help you prepare for the interview.
Most Frequently Asked Data Modeling Interview Questions
- What is data modeling?
- What are the characteristics of the physical data model?
- What is the difference between a star schema and a snowflake schema?
- What is the level of granularity of a fact table?
- What is a surrogate key? Explain it with an example.
- What do you mean by Column?
- Define a Composite Primary Key Constraint.
- Define OLTP Data Modeling
- What is meant by Normalization?
- Explain Subtype and Supertype entities
- Define Artificial primary key & its uses.
- Name the different types of Metadata in a data model
Let us have a quick review of the Data Modeling interview questions.
What is data modeling?
Ans: A data model is a conceptual representation of business requirements (logical data model) or database objects (physical) required for a database and is very powerful in expressing and communicating the business requirements and database objects. The approach by which data models are created is called data modeling.
What does the data model contain?
Ans:
Logical Data Model:
Entity, Attributes, Super Type, Sub Type, Primary Key, Alternate Key, Inversion Key Entry, Rule, Relationship, Definition, business rule, etc
Physical Data Model:
Table, Column, Primary Key Constraint, Unique Constraint or Unique Index, Non-Unique Index, Check Constraint, Default Value, Foreign Key, Comment, etc. (a DDL sketch of these elements follows).
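A minimal sketch of the physical-model elements listed above (table, column, primary key, unique constraint, check constraint, default value, foreign key, index), expressed as SQLite DDL run from Python. The table and column names are hypothetical and only for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")        # SQLite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE department (
    dept_id    INTEGER PRIMARY KEY,             -- primary key constraint
    dept_name  TEXT NOT NULL UNIQUE             -- unique constraint
);
CREATE TABLE employee (
    emp_id     INTEGER PRIMARY KEY,
    emp_name   TEXT NOT NULL,
    salary     REAL CHECK (salary >= 0),        -- check constraint
    hire_date  TEXT DEFAULT CURRENT_DATE,       -- default value
    dept_id    INTEGER REFERENCES department(dept_id)  -- foreign key
);
CREATE INDEX idx_employee_dept ON employee (dept_id);   -- non-unique index
""")
conn.close()
```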
Explain various types of data models.
Ans: There are mainly three different types of data models:
- Conceptual: The conceptual data model defines what the system should contain. This model is typically created by business stakeholders and data architects. The purpose is to organize, scope, and define business concepts and rules.
- Logical: Defines how the system should be implemented regardless of the DBMS. This model is typically created by data architects and business analysts. The purpose is to develop a technical map of rules and data structures.
- Physical: This data model describes how the system will be implemented using a specific DBMS system. This model is typically created by DBA and developers. The purpose is the actual implementation of the database.
What are the important types of relationships in a data model?
Ans: The types of relationship in a data model are stated as follows.
- Identifying relationship.
- Non-identifying relationship.
- Self-recursive relationship.
What is the difference between the logical data model and physical data model?
Ans:
Logical data model:
- A logical data model designs the business requirements logically.
- It describes the data independently of how it will be physically implemented and stored in the database.
- It contains entities, primary key attributes, inversion keys, alternate keys, rules, business relationships, definitions, etc.
Physical data model:
- A physical data model provides information about the target database source and its properties.
- A physical data model helps you create a new database model from an existing one and apply referential integrity constraints.
- A physical data model contains a table, key constraints, unique key, columns, foreign key, indexes, default values, etc.
What are the characteristics of a logical data model?
Ans: Characteristics of the logical data model are:
- Describes data needs for a single project but could integrate with other logical data models based on the scope of the project.
- Designed and developed independently from the DBMS.
- Data attributes will have data types with exact precisions and length.
- Normalization is applied to the model, generally up to the third normal form (3NF).
What are the characteristics of the physical data model?
Ans: Characteristics of the physical data model are:
- The physical data model describes the data needed for a single project or application. It may be integrated with other physical data models based on project scope.
- The data model contains relationships between tables that address cardinality and nullability of the relationships.
- Developed for a specific version of a DBMS, location, data storage, or technology to be used in the project.
- Columns should have exact data types, lengths assigned, and default values.
- Primary and foreign keys, views, indexes, access profiles, and authorizations, etc. are defined.
What are the advantages of using data modeling?
Ans: The advantages of using data modeling in data warehousing are:
- It helps you to manage business data by normalizing it and defining its attributes.
- Data modeling integrates the data of various systems to reduce data redundancy.
- It enables the creation of an efficient database design.
- Data modeling helps departments across the organization function as a team.
- It facilitates access to data with ease.
Explain about the phases in the Data Modeling Development cycle.
Ans:
- Gathering Business Requirements: Data Modelers have to interact with business analysts to get the functional requirements and with end-users to find out the reporting needs.
- Conceptual Data Modeling(CDM): This data model includes all major entities, relationships and it will not contain much detail about attributes and is often used in the INITIAL PLANNING PHASE.
- Logical Data Modeling(LDM): This is the actual implementation of a conceptual model in a logical data model. An LDM is the version of the model that represents all of the business requirements of an organization.
- Physical Data Modeling (PDM): This is a complete model that includes all required tables, columns, relationships, and database properties for the physical implementation of the database.
- Database: DBAs instruct the data modeling tool to create SQL code from the physical data model. Then the SQL code is executed on the server to create databases.
What is a Data Mart?
Ans: A data mart is the most straightforward set of data warehousing and is used to focus on one functional area of any given business. Data marts are a subset of data warehouses oriented to a specific line of business or functional area of an organization such as marketing, finance and sales. Data enters data marts by an assortment of transactional systems, other data warehouses, or even external sources.
Explain data warehousing in detail.
Ans: Data warehousing is a process for collecting and managing data from varied sources. It provides meaningful business enterprise insights. Data warehousing is typically used to connect and analyse data from heterogeneous sources. It is the core of the BI system, which is built for data analysis and reporting.
What is the difference between a view and a materialized view?
Ans:
View:
- A view is created by combining data from different tables, so a view does not store data of its own.
- When a view is created, no data is stored in the database; the data is produced when a query is run against the view.
Materialized view:
- A materialized view, commonly used in data warehousing, does contain data. This data helps in decision making, performing calculations, etc.
- The data of a materialized view is stored; it is computed beforehand using the defining query (a small sketch follows).
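A small sketch of the difference, assuming SQLite via Python's sqlite3 module. SQLite has no MATERIALIZED VIEW statement, so the materialized case is simulated here with CREATE TABLE ... AS SELECT; table and column names are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount REAL);
INSERT INTO sales VALUES ('East', 100), ('East', 150), ('West', 80);

-- A view stores only the query; rows are computed each time it is read.
CREATE VIEW v_sales_by_region AS
    SELECT region, SUM(amount) AS total FROM sales GROUP BY region;

-- A materialized view stores the computed rows; simulated here with a table.
CREATE TABLE mv_sales_by_region AS
    SELECT region, SUM(amount) AS total FROM sales GROUP BY region;
""")
print(conn.execute("SELECT * FROM v_sales_by_region").fetchall())
print(conn.execute("SELECT * FROM mv_sales_by_region").fetchall())
conn.close()
```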
Explain the Two Different Design Schemas.
Ans: The two design schemas are the Star schema and the Snowflake schema.
- The Star schema has a fact table centered with multiple dimension tables surrounding it.
- A Snowflake schema is similar, except that the level of normalization is higher, which results in the schema looking like a snowflake.
What is the difference between a star schema and a snowflake schema?
Ans:
Star Schema:
In a star schema, the fact table holds your desired facts (measures) along with the primary keys of all the dimension tables, so the fact table's key is essentially the combination of its dimension table keys. In a star schema, dimension tables are usually not in BCNF form.
Snowflake Schema:
It is similar to a star schema, but the dimension tables are normalized to third normal form (3NF), so there are more dimension tables. These dimension tables are linked by primary key-foreign key relationships (see the sketch below).
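A rough star-schema sketch: one fact table holding measures plus the primary keys of its dimension tables. In a snowflake schema, the customer dimension would itself be split further (for example into a separate city table). The names below are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE dim_date     (date_key     INTEGER PRIMARY KEY, full_date TEXT);
CREATE TABLE dim_product  (product_key  INTEGER PRIMARY KEY, product_name TEXT);
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, customer_name TEXT, city TEXT);

CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date(date_key),
    product_key  INTEGER REFERENCES dim_product(product_key),
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    quantity     INTEGER,     -- measure
    amount       REAL         -- measure
);
""")
conn.close()
```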
What are the critical relationship types found in a Data Model? Describe them.
Ans: The main relationship types are:
- Identifying: A relationship line normally connects parent and child tables. But if a child table’s reference column is part of the table’s primary key, the tables are connected by a thick line, signifying an identifying relationship.
- Non-identifying: If a child table’s reference column is NOT a part of the table’s primary key, the tables are connected by a dotted line, signifying a non-identifying relationship.
- Self-recursive: A recursive relationship occurs when a standalone column in a table references the primary key of the same table (see the sketch below).
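A minimal sketch of a self-recursive relationship: a manager_id column in an employee table referring back to the same table's primary key. Table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE employee (
    emp_id     INTEGER PRIMARY KEY,
    emp_name   TEXT NOT NULL,
    manager_id INTEGER REFERENCES employee(emp_id)   -- points back to the same table
);
INSERT INTO employee VALUES (1, 'Asha', NULL);        -- top-level manager
INSERT INTO employee VALUES (2, 'Ravi', 1);           -- reports to Asha
""")
conn.close()
```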
What is the difference between a data mart and a data warehouse?
Ans:
Data mart:
- It focuses on a single subject area of business.
- It is used to make tactical decisions for business growth.
- Data mart follows the bottom-up model.
- Data usually comes from a single source.
Data warehouse:
- It focuses on multiple areas of business.
- It helps business owners to take a strategic decision.
- Data warehouse follows a top-down model.
- Data comes from multiple heterogeneous data sources.
What does a data modeller use normalization for?
Ans: The purposes of normalization are listed below; a short example follows the list.
- Remove useless or redundant data.
- Reduce data complexity.
- Ensure relationships between the tables in addition to the data residing in the tables.
- Ensure data dependencies and that the data is stored logically.
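A small sketch of removing redundancy through normalization: the department name repeated on every employee row is moved into its own table. Table names are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Unnormalized: the department name is repeated on every employee row.
CREATE TABLE employee_flat (
    emp_id INTEGER PRIMARY KEY, emp_name TEXT, dept_name TEXT
);

-- Normalized: the repeated attribute lives once in its own table.
CREATE TABLE department (dept_id INTEGER PRIMARY KEY, dept_name TEXT UNIQUE);
CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY, emp_name TEXT,
    dept_id INTEGER REFERENCES department(dept_id)
);
""")
conn.close()
```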
What is OLTP data modeling?
Ans: OLTP acronym stands for ONLINE TRANSACTIONAL PROCESSING. The approach by which data models are constructed for transactions is called OLTP data modeling.
Example: all online transactions, bank transactions, trading transactions.
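A small sketch of an OLTP-style operation: a short transaction that either commits both updates or rolls them back together. The account numbers and table names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (acct_no TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO account VALUES ('A-100', 500.0), ('B-200', 300.0)")
try:
    with conn:  # one atomic transaction
        conn.execute("UPDATE account SET balance = balance - 50 WHERE acct_no = 'A-100'")
        conn.execute("UPDATE account SET balance = balance + 50 WHERE acct_no = 'B-200'")
except sqlite3.Error:
    pass  # on any error the whole transfer is rolled back
conn.close()
```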
Explain various types of fact tables.
Ans: There are three types of fact tables:
- Additive: a measure that can be summed across any dimension.
- Non-additive: a measure that cannot be summed across any dimension.
- Semi-additive: a measure that can be summed across some dimensions but not others.
What is the purpose of the Factless Fact Table?
Ans: Factless fact tables simply contain keys that refer to the dimension tables. They do not hold any measures of their own, but they are commonly used for tracking events.
Example: To find the number of leaves taken by an employee in a month.
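A sketch of a factless fact table: it stores only dimension keys (no measures) and records that an event happened, for example an employee taking leave on a given date. Names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE dim_employee (emp_key  INTEGER PRIMARY KEY, emp_name TEXT);
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, full_date TEXT);

CREATE TABLE fact_employee_leave (          -- no measure columns at all
    emp_key  INTEGER REFERENCES dim_employee(emp_key),
    date_key INTEGER REFERENCES dim_date(date_key)
);
""")
# Counting leaves per employee per month is then a simple COUNT(*) over the keys.
conn.close()
```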
What is the level of granularity of a fact table?
Ans: A fact table is usually designed at a low level of granularity. This means finding the lowest level of information that can be stored in a fact table.
Example: Employee performance is a very high level of granularity; employee_performance_daily and employee_performance_weekly can be considered lower levels of granularity.
What’s the difference between forward and reverse engineering, in the context of Data Models?
Ans:
- Forward engineering is a process where Data Definition Language (DDL) scripts are generated from the data model itself. DDL scripts can be used to create databases.
- Reverse Engineering creates data models from a database or scripts. Some data modeling tools have options that connect with the database, allowing the user to engineer a database into a data model.
What are the two types of data modelling techniques? Describe them.
Ans: Two types of data modelling techniques are:
Entity-Relationship (E-R) Model:
ER model or entity-relationship model is a methodology of data modeling that normalizes the data by reducing redundancy.
UML (Unified Modelling Language):
It is a general-purpose modelling language used in the field of software engineering, including database development. Its main intention is to provide a generalized way to visualize system design.
What is dimensional modeling?
Ans: A dimensional model consists of dimension and fact tables. Fact tables store the transactional measurements and the foreign keys of the dimension tables that qualify the data. The goal of the dimensional model is not to achieve a high degree of normalization but to facilitate easy and fast data retrieval.
Differentiate between Relational Data modeling and Dimensional Data modeling.
Ans:
Relational Data modeling:
- Data is stored in RDBMS.
- Tables are units of storage.
- Data is normalized and used for OLTP. Optimized for OLTP Processing.
- Several tables and chains of relationships among them.
- Volatile; holds current data.
- Detailed level of transactional data.
Dimensional Data modeling:
- Data is stored in RDBMS or Multidimensional databases.
- Cubes are units of storage.
- Data is denormalized and used in data warehouses and data marts. Optimized for OLAP processing.
- Few tables and fact tables are connected to dimensional tables.
- Non-volatile and time-variant (retains history).
- Summary of bulky transactional data.
Differentiate between ER Modeling and Dimensional Modeling.
Ans:
- The entity-relationship (ER) model is a method used to represent the logical flow of entities/objects graphically, which in turn is used to create a database. It has both a logical and a physical model. This model is good for reporting and point queries.
- A dimensional model is a method in which the data is stored in two types of tables, namely fact tables and dimension tables. It has only a physical model. It is good for ad hoc query analysis.
What is the importance of using keys?
Ans:
- Keys help you to identify any row of data in a table. In a real-world application, a table could contain thousands of records.
- Keys ensure that you can uniquely identify a table record despite these challenges.
- Keys allow you to establish relationships between tables and to identify how tables relate to one another.
- Keys help you enforce identity and referential integrity in the relationship.
What is a surrogate key? Explain it with an example.
Ans: Data warehouses commonly use a surrogate key to uniquely identify an entity. A surrogate key is not supplied by the user but generated by the system. In a few databases, the primary difference between a primary key and a surrogate key is that the primary key uniquely identifies a record while the surrogate key uniquely identifies an entity.
Example: An employee may be recruited before the year 2010 while another employee with the same name may be recruited after the year 2010. Here, the primary key will uniquely identify the record while the surrogate key will be generated by the system (say a serial number) since the surrogate key is not derived from the data.
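A sketch of a system-generated surrogate key, assuming SQLite: the database assigns emp_key itself, so two employees with the same natural attributes still get distinct keys. Column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE employee (
    emp_key   INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key, not derived from the data
    emp_name  TEXT NOT NULL,
    hire_year INTEGER
)""")
conn.execute("INSERT INTO employee (emp_name, hire_year) VALUES ('John Smith', 2008)")
conn.execute("INSERT INTO employee (emp_name, hire_year) VALUES ('John Smith', 2012)")
print(conn.execute("SELECT emp_key, emp_name, hire_year FROM employee").fetchall())
# -> [(1, 'John Smith', 2008), (2, 'John Smith', 2012)]
conn.close()
```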
What are the examples of the OLTP system?
Ans: Examples of OLTP systems are:
- Sending a text message.
- Adding a book to the shopping cart.
- Online airline ticket booking.
- Online banking.
- Order entry.
What is an enterprise data model?
Ans: An enterprise data model comprises all the entities required by an enterprise. The development of a common, consistent view and understanding of data elements and their relationships across the enterprise is referred to as enterprise data modeling. For easier understanding, these data models are split up into subject areas.
What do you mean by Data Modeling?
Ans. Data modeling is the process of developing a model for the data to be stored in a database. The data model is a conceptual representation of the business requirements or the physical objects required for a database, and it helps communicate and present those business needs and objects clearly. The approach by which data models are created is called data modeling.
What are the different types of Data Models?
Ans. The following are the types of Data Models:-
- Logical Data Model: This data model is developed by business analysts, data architects, etc. and defines how the system should be implemented independently of any specific DBMS. It includes entities, attributes, subtypes, primary keys, alternate keys, relationships, etc.
- Physical Data Model: This data model is built by database administrators and defines how the system will be implemented in a particular DBMS. It includes tables, columns, primary key constraints, unique indexes, default values, foreign keys, comments, etc.
- Conceptual Data Model: This data model is created by data architects and describes what a system should include. The purpose of creating a conceptual data model is to define the entities, their relationships, and their attributes.
Define Physical Data Modeling & Physical Data Model.
Ans. The physical data model consists of all the tables, columns, database properties, and relationships needed for the physical implementation of a database. Important aspects of a physical data model include database performance, indexing strategy, and physical storage.
The table is the major component here and consists of rows and columns. Physical data modeling is the approach by which these data models are built.
What is meant by a table in data modeling?
Ans. A table is a collection of rows and columns in which data is stored. Each column in a table has a data type, and the data is held in a tabular format.
What do you mean by Column?
Ans. A column, also called a field, is a vertical arrangement of data that contains related information.
What is meant by Row?
Ans. A row, also called a tuple or record, is a horizontal arrangement of data.
Distinguish between a Physical & Logical Data Model.
Ans.
A Logical Data Model designs the needs of the business logically, independently of how the data will be physically stored in a database. The logical data model consists of entities, primary key attributes, alternate keys, business rules, relationships, etc.
A Physical Data Model provides information about the target database and its properties and is useful for building a new database model from an existing one. Tables, columns, unique keys, foreign keys, indexes, etc. are included in this data model.
Define ER (Entity Relationship) diagram or ERD in Data Modeling.
Ans. An ER diagram (ERD) is a visual representation of entities (tables) and their relationships. In a data model, the tables appear as square or rectangular boxes containing their attributes, and these boxes are connected by lines that represent the relationships.
What is meant by a Surrogate key?
Ans. A surrogate key is a numeric attribute that is used as a primary key. It is an artificial key that acts as a substitute for a natural key.
Data modelers create a surrogate key when no natural primary key or suitable composite primary key is available. This key is useful in SQL queries for uniquely identifying a record, and it performs well.
What do you mean by a Foreign key?
Ans. A foreign key links a child table to a parent table on one or more columns. The values in the foreign key column of the child table always refer to primary key values present in the parent table.
Define Primary Key Constraint.
Ans. A primary key constraint is enforced on a column to prevent null and duplicate values: Primary Key = Unique + Not Null. Examples of primary keys include a bank account number, a bank routing number, etc.
Define a Composite Primary Key Constraint.
Ans. When more than one column is used together as the primary key of a table, it is called a composite primary key constraint.
Define a Composite Foreign Key constraint in Data modeling.
Ans. In data modeling, when a foreign key consists of a group of columns, it is called a composite foreign key constraint.
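A sketch of a composite primary key (order_id + line_no) and a composite foreign key referencing it from a child table. The table and column names are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE order_line (
    order_id INTEGER,
    line_no  INTEGER,
    item     TEXT,
    PRIMARY KEY (order_id, line_no)              -- composite primary key
);
CREATE TABLE shipment_line (
    shipment_id INTEGER,
    order_id    INTEGER,
    line_no     INTEGER,
    FOREIGN KEY (order_id, line_no)              -- composite foreign key
        REFERENCES order_line (order_id, line_no)
);
""")
conn.close()
```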
Define Cardinality.
Ans. Cardinality is a numerical attribute of the relationship between two tables or entities. There are several types of cardinality, such as one-to-one, one-to-many, and many-to-many. In a database, a high-cardinality column is one that can hold many unique or possible values, and vice versa.
Define the Enterprise data model.
Ans. The enterprise data model includes all the tables or entities that an enterprise requires. Enterprise data modeling refers to developing a common, consistent view and understanding of data attributes and their relationships throughout the organization.
What do you mean by Relational data modeling?
Ans. Relational data modeling is the visual representation of objects within a relational database.
Explain the various types of Relationships in a Data Model.
Ans. There are three different types of relationships in a Data Model. Such as identifying relationships, non-identifying relationships, and self-recursive relationships. Let us know these relationships in detail.
Generally, in a data model, 2 types of tables are present: parent tables and child tables. Both of these tables are linked through a relationship line.
- If the referencing column in the child table is part of the child table's primary key, the two tables are linked by a thick line and the relationship is called an identifying relationship. In other words, this type of relationship connects a strong entity to a weak entity.
- A non-identifying relationship is a relationship between two tables in which a row of the child table can be identified independently of its relationship with the parent table, because the referencing column is not part of the child table's primary key.
- If an independent column in a table or entity is connected to the primary key of the same table, then it is called a recursive relationship.
Define OLTP Data Modeling.
Ans. The term OLTP refers to ONLINE TRANSACTIONAL PROCESSING that uses traditional DBMS. The approach using which data models are built for transactions is OLTP data modeling. Following are examples of OLTP: online transactions, trading transactions, order entry, bank transactions, etc.
Difference between Conceptual Data Model and Conceptual Data Modeling.
Ans. The conceptual data model consists of all the important entities or tables and their relationships. It does not include much detailed information about attributes and is frequently used in the early planning stage. Conceptual data modeling is the approach by which this model is built; data modelers create the conceptual data model and forward it to the functional team for review.
What do you mean by a Constraint and its different types?
Ans. The constraints are the rules that are enforced on the data columns available in the table. There are various types of constraints in data modeling such as primary key, not null, foreign key, composite foreign key, unique, check constraint, etc.
What do you mean by an index?
Ans. An index is created on a column or a set of columns so that data can be retrieved faster.
Define a Unique Constraint.
Ans. In a data model, a unique constraint is enforced on a column to prevent duplicate values. Unlike a primary key, it allows NULL values.
Define sequence.
Ans. A sequence is a database object used to generate unique numbers.
What is the number of null values inserted in a unique constraint column?
Ans. There is no limit; you can insert as many NULL values as you like into a unique constraint column, because one NULL value is not considered equal to another.
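A quick check of the point above, assuming SQLite: a UNIQUE column rejects duplicate values but accepts several NULLs, because NULL is not considered equal to NULL. The behaviour of a few other databases can differ, so treat this as a sketch rather than a universal rule.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contact (contact_id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
conn.execute("INSERT INTO contact (email) VALUES ('a@example.com')")
conn.execute("INSERT INTO contact (email) VALUES (NULL)")
conn.execute("INSERT INTO contact (email) VALUES (NULL)")   # a second NULL is also accepted
try:
    conn.execute("INSERT INTO contact (email) VALUES ('a@example.com')")
except sqlite3.IntegrityError as exc:
    print("duplicate rejected:", exc)
conn.close()
```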
Define Check constraints in a data model.
Ans. A check constraint in a data model is used to restrict the range of values allowed in a column.
What is meant by Normalization?
Ans. Normalization is a relational database design technique in which tables are organized so as to remove data redundancy. It also helps to avoid common data anomalies and reduces undesirable data dependencies. It does this by splitting larger tables into several smaller, related tables.
Name the different types of normalization.
Ans. The normal forms used in data modeling are the First normal form (1NF), Second normal form (2NF), Third normal form (3NF), Boyce-Codd normal form (BCNF), Fourth normal form (4NF), and Fifth normal form (5NF).
Explain the data model Repository.
Ans. A data model repository is a storage location that holds data models and their related metadata, such as entity definitions, attribute definitions, columns, and data types. This data is accessible to data modelers and the whole team.
Define De-Normalization.
Ans. De-normalization in data modeling is a process in which redundancy is deliberately added to the data; it is often used when building a data warehouse. It also helps retrieve data from the database faster.
Define Forward Engineering in a data model.
Ans. Forward engineering is the process by which DDL (Data Definition Language) scripts are generated from the data model. Data modeling tools provide options to generate DDL scripts for particular target databases, and these scripts can then be used to create the databases.
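A toy sketch of forward engineering: turning a small in-memory model description into DDL text. Real modeling tools do far more; the model structure and helper name below are entirely hypothetical.

```python
# Hypothetical model description: table name -> {column name: column type}.
model = {
    "department": {"dept_id": "INTEGER PRIMARY KEY", "dept_name": "TEXT NOT NULL"},
    "employee":   {"emp_id": "INTEGER PRIMARY KEY", "emp_name": "TEXT", "dept_id": "INTEGER"},
}

def to_ddl(model: dict) -> str:
    """Render each entity in the model as a CREATE TABLE statement."""
    statements = []
    for table, columns in model.items():
        cols = ",\n    ".join(f"{name} {ctype}" for name, ctype in columns.items())
        statements.append(f"CREATE TABLE {table} (\n    {cols}\n);")
    return "\n".join(statements)

print(to_ddl(model))
```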
What is meant by Metadata in data modeling?
Ans. Metadata is data about the objects within the data model; it describes the data rather than being the actual data stored in the database. Data modeling tools provide various options to generate metadata reports from either the logical data model or the physical data model.
Define Reverse Engineering in Data modeling.
Ans. Reverse engineering is the process of developing data models from existing databases or scripts. Data modeling tools provide options to connect to a database and reverse engineer it into a data model.
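A sketch of the reverse-engineering idea: reading an existing database's catalog to recover its table definitions, which a modeling tool would then turn into a data model. This uses SQLite's sqlite_master catalog table; the example tables are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, dept_name TEXT)")
conn.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, dept_id INTEGER)")

# Read back the stored DDL for every table in the database.
for name, ddl in conn.execute("SELECT name, sql FROM sqlite_master WHERE type = 'table'"):
    print(name)
    print(ddl)
conn.close()
```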
Explain Subtype and Supertype entities.
Ans. An entity in a data model can be divided into several sub-entities, grouped on the basis of certain characteristics. Each sub-entity has attributes relevant only to itself, in addition to the attributes it shares with the other subgroups. These sub-entities are the subtype entities in a data model.
A supertype entity is an entity that has a relationship with many subtype entities. In this entity, the attributes that are common to them are moved to a super or higher-level entity.
Mention an example of Preemptive and Non-preemptive scheduling in a data model.
Ans. Consider a non-computer example: a system in which people use and compete for shared resources. An example of preemptive scheduling is an emergency, such as an ambulance travelling on a single-lane road, where the other traffic yields the road to the vehicle in need. Popular examples of non-preemptive scheduling include First Come First Serve (FCFS) and Shortest Job First (SJF).
Distinguish between a Snowflake Schema & a Star Schema.
Ans.
Star Schema: In this schema, the fact table holds the desired facts along with the primary keys of the dimension tables. Thus, it includes both dimension and fact tables.
Snowflake Schema: This schema is similar to the star schema, but it includes three kinds of tables: fact, dimension, and sub-dimension tables. These additional tables are connected through primary key-foreign key relationships.
Define Data Sparsity and its effects on aggregation.
Ans. Data sparsity describes how much data we have for a particular entity or dimension of the model. It affects aggregation depending on how deeply the members of the sparse dimensions are combined.
If the dimensions carry insufficient information, the aggregations may require more space, resulting in a large database.
Differentiate between the Hashed file stage and Sequential file stage in relation to DataStage Server.
Ans. In the DataStage server, a hashed file stores data using a hash algorithm and a hash key value, whereas a sequential file has no key value for storing data.
A sequential file stage is used to read data from, or write data to, one or more flat files, while a hashed file stage is used to look up data within a DataStage job. Each hashed file stage can have many inputs or outputs.
What do you mean by ERD?
Ans. An ER (entity-relationship) diagram is a tool used to describe the data requirements and assumptions in a system from a top-down standpoint. It also makes it possible to design databases later on in the SDLC.
The following are three basic elements available in ER models:
- Entities/tables are the “things” about which we look for any information.
- Attributes or elements are the data we fetch about the entities or tables.
- Relationships are the associations between entities/tables that let us bring together information from different entities/tables.
Define the Third normal form of Normalization.
Ans. An entity is in the third normal form (3NF) if it is in the second normal form and none of its attributes are transitively dependent on the primary key. Transitive dependence means that a descriptor attribute depends not only on the entire primary key but also on another descriptor attribute, which in turn depends on the primary key.
Why are Recursive relationships problematic? How can we solve them?
Ans. Recursive relationships are a more complex concept than the relationships we have seen earlier like one-to-one, one-to-many, etc. This type of relationship takes place when there is a relation between an entity and itself.
Due to their complex nature, we may need a moderately more complex system to map them to a schema and present them in a style sheet.
Define Artificial primary key & its uses.
Ans. If we use a name as the primary key in a data model, it violates the principle of stability. A social security number might be a valid option, but an employee working in a foreign country might not have one. In such a case, a derived or artificial primary key is the best option.
An artificial key is a derived key that we create, whereas a natural key is one that already exists in the data.
Define a Second Normal form (2NF).
Ans. An entity is considered to be in the second normal form (2NF) if it is in the first normal form and all of its attributes depend on the entire primary key. In declarative terms, each column within a table must be functionally dependent on the whole primary key of the same table; this dependence means that the primary key's value determines the value in each column.
What is meant by Granularity?
Ans. Granularity refers to the level of detail, i.e. the volume of information that a table carries. It can be high or low: high-granularity data includes detailed, transaction-level information, whereas low-granularity data contains only summarized, low-detail information.
All databases must be in the Third normal form - True or False?
Ans. Usually, enterprise databases are normalized to the third normal form to remove redundancy and allow efficient access. However, we can also develop a database without normalization. Therefore, it is not necessary for all databases to be in the third normal form.
Define Conformed Dimension.
Ans. A dimension is said to be conformed when the same dimension is attached to two or more fact tables, so that measures such as profit, revenue, price, margin, and cost can be analysed consistently across them.
What are Dimensions in data?
Ans. Dimensions are groups of unique values used to identify and categorize data in the data store or warehouse.
What do you mean by Data Mart?
Ans. A data mart is a kind of data warehouse that is designed & developed for a specific unit or a section within an enterprise such as Sales, HR, Marketing, Finance, etc.
Name the different types of Metadata in a data model.
Ans. The different types of metadata in a data model are:
- Business Metadata
- Descriptive Metadata
- Technical Metadata
- Structural Metadata
- Statistical Metadata
- Reference Metadata
What are the possible or common mistakes that we can face in data modeling?
Ans. Common mistakes include building a data model without a clear purpose, creating overly large data models, adding unnecessary surrogate keys, and applying denormalization where it is not needed.
Name the different Dimension types in the data modeling.
Ans. Junk Dimension, Conformed Dimension, Role-playing Dimension, Degenerated Dimension, Slowly Changing Dimension, Swappable Dimensions, etc.
Define OLAP.
Ans. The term OLAP refers to On-Line Analytical Processing, a type of technology that allows managers, analysts, etc. to get data insights in a secure, consistent, and fast manner. Moreover, the OLAP technology is useful in many smart solutions such as budgeting, analysis, planning, simulation models, etc.
Explain PDAP.
Ans. PDAP (Praedico Data Platform) is a data cube used to store data as summaries. Data in PDAP is stored in such a way that users can easily report on it.
Define Discrete data and Continuous data.
Ans. Discrete data is fixed or limited in its values, such as gender or telephone numbers. Continuous data, on the other hand, changes in a constant, ordered manner, such as age.