IDQ INTERVIEW QUESTIONS

These days, business organizations run and grow on data. They use powerful tools to slice, dice, and otherwise work with that data to reach potential customers and drive the business consistently. Informatica IDQ is one of the widely used data quality tools, connecting customers and business organizations by letting them perform data-related functions effectively. In this article, you can go through a set of Informatica IDQ interview questions frequently asked on interview panels. These questions, curated by topmost industry experts at HKR training, will help you crack the interview. Let us have a quick review of the IDQ interview questions.

1. What do you know about IDQ? Give a brief idea about the core components in IDQ?

Ans: IDQ stands for Informatica Data Quality, a tool used by data analysts to manage data quality across different kinds of workloads. IDQ integrates with Informatica PowerCenter to ensure that quality data is delivered, and it is specifically designed for data profiling, cleansing, and matching.

IDQ includes two core components that are listed below:

Data quality workbench: The data quality workbench is the component that performs data analysis and creates the rules accordingly for data enrichment and standardization.

Data quality server: The Data Quality Server is used to run plans, share files, and execute jobs in a networked environment. It supports networking across different domains and communicates with the workbench over TCP/IP.

2. Give a brief idea about connected lookup and unconnected lookup?

Ans: The following are the differences between connected lookup and unconnected lookup.

Connected Lookup: A connected lookup participates directly in the data flow and receives its input from other transformations in the mapping. It can use either a static or a dynamic cache, and it supports user-defined default values.

Unconnected Lookup: An unconnected lookup sits outside the data flow and is invoked through a :LKP expression in another transformation. It returns only one column value and does not support user-defined default values.
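As a rough analogy (plain Python, not Informatica code; the table, rows, and column names here are invented for illustration), the two lookup styles behave like this:

```python
# Hypothetical analogy for connected vs. unconnected lookups.
customers = {101: {"name": "Ava", "city": "Austin"}}  # the lookup source

def connected_lookup(row):
    # Connected: sits inside the data flow, receives the whole row,
    # and can return multiple columns back into the pipeline.
    extra = customers.get(row["cust_id"], {"name": None, "city": None})
    return {**row, **extra}

def unconnected_lookup(cust_id):
    # Unconnected: called like a function (a :LKP expression)
    # and returns exactly one value.
    return customers.get(cust_id, {}).get("city")

row = {"cust_id": 101, "amount": 50}
enriched = connected_lookup(row)   # row gains name and city columns
city = unconnected_lookup(101)     # a single value comes back
```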


3. What do you understand by the terms: predefined event and user-defined event?

Ans: A predefined event is an event built into Informatica, typically a file-watch event: it waits for a particular file to arrive at a specific location before the workflow proceeds.
A user-defined event is a collection or flow of tasks within the workflow process. User-defined events are created and raised based on the requirement.

4. Define the term transformation in IDQ? List out the different types of transformations that are available in Informatica?

Ans: A transformation is a repository object that represents a specific activity: it generates, modifies, or passes data as rows flow through a mapping.

There are different types of transformations that are available in Informatica. They are listed below:

Aggregator transformation: Aggregator transformation is an active transformation used to perform calculations on the data. The aggregate transformation stores the aggregate data in the aggregate cache.
Filter transformation: A filter transformation is a connected and active transformation used to filter the rows that do not meet the condition in the mapping.
Lookup transformation: Lookup transformation is a connected or unconnected transformation (passive by default) used to look up data in a relational table or flat file.
Rank transformation: Rank transformation is a connected and active transformation used to perform the selection of the top or bottom rank.
Expression transformation: The expression transformation calculates values within a single row; it is used for non-aggregate calculations.
Joiner transformation: Joiner transformation is a connected and active transformation used to join two sources of data. The source of the data can be either from one origin or two different origins.
Normalizer transformation: Normalizer transformation is an active transformation that helps in the conversion of a single row into multiple rows or multiple rows into a single row.
Router transformation: Router transformation is a connected and active transformation that routes rows into multiple output groups based on conditions; rows that satisfy no condition can be captured in the default group.
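A few of these transformations can be sketched as rough Python analogies (illustrative logic only, not Informatica code; the sample rows are invented):

```python
# Toy row set, each row modeled as a dict.
rows = [
    {"dept": "A", "sales": 10},
    {"dept": "A", "sales": 30},
    {"dept": "B", "sales": 5},
]

# Filter transformation: drop rows that fail the condition (active).
filtered = [r for r in rows if r["sales"] >= 10]

# Aggregator transformation: group-by with a calculation (active).
totals = {}
for r in rows:
    totals[r["dept"]] = totals.get(r["dept"], 0) + r["sales"]

# Router transformation: route rows into groups; rows matching no
# condition fall into a default group.
high, default = [], []
for r in rows:
    (high if r["sales"] > 20 else default).append(r)
```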

5. Give brief differences between static lookup and dynamic lookup cache?

Ans: A static cache is built once and cannot be modified during the session. By default, the Informatica lookup cache is static.
A dynamic cache can be modified during the session: rows are inserted into or updated in the cache as they are processed, so later rows can see earlier changes.
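The practical difference can be sketched in plain Python (an assumed analogy, not Informatica internals; the keys and values are made up):

```python
# A dynamic lookup cache lets the session detect rows it has already
# seen within the same run, because the cache is updated mid-session.
cache = {"K1": "existing"}  # cache as built at session start

def process_row_dynamic(key, value):
    # Dynamic cache behavior: insert when the key is new so that
    # duplicate source rows within the same run are caught.
    if key in cache:
        return "update"
    cache[key] = value
    return "insert"

first = process_row_dynamic("K2", "new")   # key not cached yet
second = process_row_dynamic("K2", "new")  # cache now contains K2
```

With a static cache, both calls would see the original snapshot and report the same result; the dynamic cache is what makes the second row an update.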

6. What do you know about the update strategy? Briefly describe the multiple options available in the update strategy?

Ans: The Update Strategy transformation processes the source data row by row and flags each row for insert, update, delete, or reject. It is used when insert or update logic needs to be applied conditionally at the row level.

There are multiple options available in the update strategy, which are listed below:

  • DD_INSERT: Flags the row for insertion; numeric equivalent 0
  • DD_UPDATE: Flags the row for update; numeric equivalent 1
  • DD_DELETE: Flags the row for deletion; numeric equivalent 2
  • DD_REJECT: Flags the row for rejection; numeric equivalent 3
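The flags and their documented numeric values can be mirrored in a small Python sketch (the row fields and the flagging rule are hypothetical, shown only to illustrate how an update-strategy expression decides per row):

```python
# Numeric equivalents of the update strategy constants.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(row):
    # Example rule: reject rows with no key, insert new customers,
    # otherwise update the existing record.
    if row.get("cust_id") is None:
        return DD_REJECT
    return DD_INSERT if row.get("is_new") else DD_UPDATE
```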

7. List out the different types of dimensions available in Informatica IDQ?

Ans: There are three different types of dimensions available in Informatica IDQ. They are listed below:

  • Junk dimension
  • Degenerate dimension
  • Conformed dimension

8. List out the different types of loadings available in Informatica?

Ans: Informatica includes two different types of loadings: Normal loading and bulk loading.

Normal Loading: Normal loading inserts records into the target one by one, in sequence, with full database logging, so loading large volumes into the target takes considerable time.

Bulk Loading: Bulk loading inserts batches of records at once, bypassing the database log, so it takes far less time than normal loading; the trade-off is that session recovery is not possible.
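The idea behind the two modes can be sketched with Python's sqlite3 as a stand-in database (Informatica does this at the driver level; the table and data here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, name TEXT)")
records = [(1, "a"), (2, "b"), (3, "c")]

# "Normal"-style loading: one row per statement, one round trip each.
for rec in records:
    conn.execute("INSERT INTO target VALUES (?, ?)", rec)
conn.execute("DELETE FROM target")  # reset for the second demo

# "Bulk"-style loading: many rows submitted in a single call.
conn.executemany("INSERT INTO target VALUES (?, ?)", records)
count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
```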

9. Where can we find the throughput option in Informatica?

Ans: The throughput option is available in the Informatica Workflow Monitor. Navigate to the Workflow Monitor, right-click the session, and select Run Properties; the throughput figures appear under Source/Target Statistics.


10. Explain the process of updating a record in the table without using Update strategy?

Ans: Yes, a record can be updated in the target without an Update Strategy transformation. Define a key at the Informatica level in the target definition, then connect the key and the fields you want to update in the mapping target. At the session level, set the treat-source-rows-as property to "Update" and check the Update checkbox in the target properties.

11. What do you understand by an Address Doctor in Informatica IDQ?

Ans: Address Doctor in IDQ is the address-handling engine behind the Address Validator transformation. It compares input address data with reference data to determine the accuracy of each input address and fixes errors in those addresses, making the necessary changes to the data as per the business requirements.

12. What do you know about the Surrogate Key?

Ans: A surrogate key is a substitute for the primary key within the database: a unique, system-generated identifier, usually an integer, that represents each row in the table. Its significance is that it remains stable, so updates stay simple even when the natural primary key changes.
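The point about stability can be shown with a small sqlite3 sketch (sqlite is used purely for illustration; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customer (
    sk INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key, system-generated
    cust_code TEXT,                        -- natural/business key
    name TEXT)""")
conn.execute("INSERT INTO customer (cust_code, name) VALUES ('C-01', 'Ava')")

# The business key can change without touching the surrogate key,
# so any fact rows pointing at sk=1 remain valid.
conn.execute("UPDATE customer SET cust_code = 'C-99' WHERE sk = 1")
row = conn.execute("SELECT sk, cust_code FROM customer").fetchone()
```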

13. Briefly explain the differences between data integration service and power center integration service?

Ans: The PowerCenter Integration Service is an application service designed to run the sessions and workflows related to data management. The Data Integration Service is an application service designed to perform data integration tasks for clients such as the Analyst tool, the Developer tool, and external clients. The Developer and Analyst tools send data integration requests to the Data Integration Service, for example to run SQL data services or to run and preview data profiles and mappings.

14. Briefly explain the differences between active transformation and passive transformation in Informatica IDQ? List out some examples for each type of transformation?

Ans:

Active transformation: An active transformation in Informatica can change the number of rows that pass through the mapping. The following are some of the active transformations:

  • Sorter transformation
  • Joiner transformation
  • Router transformation
  • Rank transformation
  • Filter transformation

Passive transformation: A passive transformation in Informatica does not change the number of rows that pass through the mapping. The following are some of the passive transformations:

  • Sequence generator transformation
  • Output transformation
  • Input transformation
  • Expression transformation
  • Lookup transformation
  • External procedure transformation

15. What do you understand by the term Command Task?

Ans: The Command task is used to run shell commands and can be placed anywhere in the workflow. Commands can also be run as pre-session or post-session commands, executing before or after a session within the workflow process.


16. Can you explain how we can improve the performance of aggregator transformation in Informatica?

Ans: The performance of the aggregator transformation improves if the records are sorted before they are passed to the aggregator. Check the Sorted Input option, available under the aggregator properties, and make sure the input is sorted on the same set of columns that is used in the group-by.
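The reason sorted input helps can be illustrated with a small sketch (an assumed analogy of the principle, not Informatica internals): with rows sorted on the group-by key, each group can be emitted as soon as the key changes, instead of caching every group until the end of the input.

```python
def streaming_sum(sorted_rows):
    # Assumes (key, value) pairs arrive sorted on key, so only the
    # current group's running total needs to be held in memory.
    results, current_key, total = [], None, 0
    for key, value in sorted_rows:
        if key != current_key and current_key is not None:
            results.append((current_key, total))  # group finished: emit early
            total = 0
        current_key = key
        total += value
    if current_key is not None:
        results.append((current_key, total))
    return results

out = streaming_sum([("A", 10), ("A", 30), ("B", 5)])
```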

17. Give a brief idea about sessions and batches in Informatica?

Ans:

Sessions: A session in Informatica is a set of instructions that tells the PowerCenter server when and how to transfer data from the defined sources to the targets. In short, a session is the unit that moves data from source to target using those defined instructions.

Batches: A batch is a set of tasks to be performed, such as sessions, email tasks, and command tasks. Batches are used to combine several session executions, either serially or in parallel. The pmcmd command is used to run a batch.

18. Do you think that there is an option to export an object from the IDQ to Power center tool? If yes, specify the procedure?

Ans: Yes, there is an option to export an object from IDQ to the PowerCenter tool. The following steps export a mapping:

  • Connect to the repository service
  • Locate the project folder in the Developer tool
  • Expand the Mappings node
  • Select the mapping that needs to be exported
  • Choose Export, expand the Informatica folder, and select Object Export File
  • Locate the project folder and select the Mappings/Mapplets option
  • Browse to the location where you want to export the file

19. List out the different tools available in the workflow manager? Specify at least one alternative tool used for scheduling processes other than workflow manager?

Ans: The following tools are available in the Workflow Manager:

  • Task Developer
  • Worklet Designer
  • Workflow Designer

Control-M is one alternative tool that is used for scheduling processes outside the Workflow Manager.

20. Define the term parameter file? Give a brief idea about the different values that are available in the parameter file?

Ans: A parameter file is a plain-text file, created in WordPad or any text editor, that assigns values to parameters and variables at run time. The following kinds of values can be defined in a parameter file:

  • Mapping variables
  • Mapping parameters
  • Session parameters
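A minimal parameter file might look like this (the folder, workflow, session, and parameter names here are hypothetical, shown only to illustrate the general section-and-assignment format):

```
[Global]
$PMSuccessEmailUser=admin@example.com

[SalesFolder.WF:wf_load_customers.ST:s_m_load_customers]
$$LoadDate=2024-01-01
$InputFile1=/data/in/customers.csv
```

Here `$$LoadDate` is a mapping parameter, while `$InputFile1` is a session parameter; section headers scope the values to a specific workflow and session.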

21. Briefly explain the difference between Model repository service and PowerCenter repository service?

Ans:

Powercenter repository service:

The PowerCenter Repository Service uses folder-based security and is utilized by both the PowerCenter application services and the PowerCenter clients.

Model repository service:

The Model Repository Service uses project-based security and serves the other application services and clients, such as the Analyst Service, the Data Integration Service, the Developer tool, and the Analyst tool.

22. Explain about live data and staged data?

Ans: Live data and staged data are two options available during profile creation for data drilling. 

Live Data: For any type of profile definition, live data is the default option set during profile creation. Drilling down on live data reads the rows directly from the source.

Staged Data: All the data represented in the rows is staged in the profiling warehouse.

For some sources, such as mainframes and other big data sources, drill-down on live data is not available because it is too expensive. For such sources, the default drill-down mode is staged data.

23. Give a brief idea about Join Analysis profiling, multiple profiles, and compare profiles in Informatica IDQ?

Ans:

Join Analysis profiling: Join Analysis profiling is the process that is used to analyze the degree of potential joins between two data columns. The results of the Join analysis will be represented in the form of a Venn diagram as a percentage or numerical value.

Multiple profiles: The multiple-profile option helps in creating profiles on several tables at once. It is available in the profile wizard window, which allows a prefix and suffix with default values. The steps to create multiple profiles are: select the objects -> right-click -> select the desired profile -> select Multiple Profiles.

Compare profile: Compare profile in Informatica IDQ provides the flexibility to compare the output of two transformations, which is useful for seeing before-and-after pictures of the data. It can be used with any source object, mapplet, or transformation that has output ports.

24. What do you understand by the term Worklet? What are the different options available in it?

Ans: A worklet is a group of workflow tasks organized into a reusable set. The following tasks are available in a worklet:

  • Decision
  • Command
  • Mail
  • Event wait
  • Link
  • Session
  • Assignment
  • Timer
  • Control


25. List the components that are installed during an Informatica PowerCenter installation?

Ans: The following components are installed during the Informatica PowerCenter installation.

  • PowerCenter domain
  • PowerCenter Repository Service
  • Integration Service
  • PowerCenter clients
  • Administration Console for PowerCenter

26. Explain what is meant by the target load order in Informatica IDQ?

Ans: Target load order in Informatica defines the priority in which targets are loaded; the Informatica server loads data according to that priority. When a set of source qualifiers is linked to multiple targets, you can dictate the order so that the server loads the targets accordingly.


27. What do you mean by slowly changing dimensions? Give a brief idea about the different types of slowly changing dimensions available in Informatica?

Ans: Slowly changing dimensions (SCDs) are dimensions whose attribute values change slowly over time rather than on a fixed schedule. There are three different types of slowly changing dimensions in Informatica. They are:

  • Slowly changing dimension - Type 1: Stores only the current record; changes overwrite old values, so no history is kept.
  • Slowly changing dimension - Type 2: Stores both current and historical records as separate rows, preserving full history.
  • Slowly changing dimension - Type 3: Stores the current value and the previous value in the same row, typically in separate columns.
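A simplified Type 2 update can be sketched in Python (an assumed, minimal illustration; the record fields and function name are invented): instead of overwriting a changed attribute, the current row is closed and a new current row is appended.

```python
def scd2_update(history, cust_id, new_city, as_of):
    # Close the current row for this customer if the attribute changed.
    for rec in history:
        if rec["cust_id"] == cust_id and rec["current"]:
            if rec["city"] == new_city:
                return history  # no change, keep history as-is
            rec["current"], rec["end_date"] = False, as_of
    # Append a new current row, preserving the old one as history.
    history.append({"cust_id": cust_id, "city": new_city,
                    "start_date": as_of, "end_date": None, "current": True})
    return history

hist = [{"cust_id": 1, "city": "Austin", "start_date": "2020-01-01",
         "end_date": None, "current": True}]
hist = scd2_update(hist, 1, "Boston", "2024-01-01")
```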

28. Explain about a Data transformation Manager?

Ans: The Data Transformation Manager (DTM) process is created after the Load Manager validates the session. The DTM is the secondary process associated with a session in Informatica; it carries out the actual session processing.

29. Define the functionalities performed by STOP and ABORT options in the workflow monitor?

Ans:

STOP command: The STOP command tells the Integration Service to stop reading data from the source, while it continues processing, writing, and committing the data that has already been read.

ABORT command: The ABORT command has a 60-second timeout: if the Integration Service cannot finish processing and committing data within that time frame, it kills the DTM process and terminates the session.

30. What do you know about Enterprise Data Warehouse?

Ans: An enterprise data warehouse is a central, corporate repository that stores and manages an enterprise's historical data. The information comes from different sources, such as CRMs, ERPs, and other operational records.

Mudassir
DevOps, ERP, and IAM tools
Mudassir is a programming developer for HKR Trainings. He has a good knowledge of today's technology, has loved technology his entire life, and has been lucky enough to work as a programmer in science and technology. Big thanks to everyone who has followed him on LinkedIn and Twitter.
