Thursday, September 4, 2008

cognos interview questions



Cognos Modeling

Metadata Modeling

Datasource → Source of information for reporting and analysis

When we define a datasource, we basically define the communication path with the source of data.

Metadata Model → Business-oriented mapping of the information

→ Business-oriented mapping of the underlying database structures

Project → Highest-level container for metadata information (model)

A project can point to Multiple Datasources


Model – 1. Physical Model 2. Logical/Business Model

The Physical Model Represents the Data Source

Tables, Views, Joins, SQL Queries, Stored Procedures

SAP BW → InfoObjects, InfoCubes, InfoSources

The Logical/Business Model represents the business view of the Physical Model.


Namespace – a container. When we work with multiple data sources, a namespace is used for resolving naming ambiguity.


Namespace -- USA

-- EMP



Namespace -- India

-- EMP


Namespace is used for Establishing Security

Model – O1,O2,O3

Namespace 1 – O1,O2

Namespace 2 – O1,O2,O3

Cognos Framework Manager

Cognos Framework Manager is a tool used for creating and managing metadata models.

Starting Cognos Framework Manager

Start → Programs → Cognos 8 → Framework Manager

Cognos Framework Manager Home Page Opens

Creating a New Project

File à New Project (to Create a New Project)

New Project Window Opens

Query Subject – Business Entity of interest.

- SQL Query (When working with Data Sources)

-- Stored Procedure

Query Item – Business Attribute

-- A Query item represents the data

-- In a SQL query, they are the objects in the select clause

The aim of data warehousing

Data warehousing technology comprises a set of new concepts and tools which support the knowledge worker (executive, manager, analyst) with information material for decision making. The fundamental reason for building a data warehouse is to improve the quality of information in the organization. The key issue is the provision of access to a company-wide view of data wherever it resides. Data coming from internal and external sources, existing in a variety of forms from traditional structural data to unstructured data like text files or multimedia, is cleaned and integrated into a single repository. A data warehouse (DWH) is the consistent store of this data which is made available to end users in a way they can understand and use in a business context.

The need for data warehousing originated in the mid-to-late 1980s with the fundamental recognition that information systems must be distinguished into operational and informational systems [5]. Operational systems support the day-to-day conduct of the business, and are optimized for fast response time of predefined transactions, with a focus on update transactions. Operational data is a current and real-time representation of the business state. In contrast, informational systems are used to manage and control the business. They support the analysis of data for decision making about how the enterprise will operate now and in the future. They are designed mainly for ad hoc, complex and mostly read-only queries over data obtained from a variety of sources. Informational data is historical, i.e., it represents a stable view of the business over a period of time.

Limitations of current technology to bring together information from many disparate systems hinder the development of informational systems. Data warehousing technology aims at providing a solution for these problems.

The main characteristics of data warehouse data

Data in the DWH is integrated from various, heterogeneous operational systems (like database systems, flat files, etc.) and further external data sources (like demographic and statistical databases, WWW, etc.). Before the integration, structural and semantic differences have to be reconciled, i.e., data have to be "homogenized" according to a uniform data model. Furthermore, data values from operational systems have to be cleaned in order to get correct data into the data warehouse.

The need to access historical data (i.e., histories of warehouse data over a prolonged period of time) is one of the primary incentives for adopting the data warehouse approach.

Historical data are necessary for business trend analysis, which can be expressed in terms of understanding the differences between several views of the real-time data (e.g., profitability at the end of each month). Maintaining historical data means that periodical snapshots of the corresponding operational data are propagated and stored in the warehouse without overriding previous warehouse states. However, the potential volume of historical data and the associated storage costs must always be considered in relation to their potential business benefits. Furthermore, warehouse data is mostly non-volatile, i.e., access to the DWH is typically read-oriented. Modifications of the warehouse data take place only when modifications of the source data are propagated into the warehouse.

Finally, a data warehouse usually contains additional data, not explicitly stored in the operational sources, but derived through some process from operational data (also called derived data). For example, operational sales data could be stored at several aggregation levels (weekly, monthly, quarterly sales) in the warehouse.

Data warehouse systems

A data warehouse system (DWS) comprises the data warehouse and all components used for building, accessing and maintaining the DWH (illustrated in Figure 1). The center of a data warehouse system is the data warehouse itself. The data import and preparation component is responsible for data acquisition. It includes all programs, applications and legacy system interfaces that are responsible for extracting data from operational sources, preparing and loading it into the warehouse. The access component includes all different applications (OLAP or data mining applications) that make use of the information stored in the warehouse.

Additionally, a metadata management component (not shown in Figure 1) is responsible for the management, definition and access of all different types of metadata. In general, metadata is defined as "data about data" or "data describing the meaning of data". In data warehousing, there are various types of metadata, e.g., information about the operational sources, the structure and semantics of the DWH data, the tasks performed during the construction, the maintenance and access of a DWH, etc. The need for metadata is well known. Statements like "A data warehouse without adequate metadata is like a filing cabinet stuffed with papers, but without any folders or labels" characterize the situation. Thus, the quality of metadata and the resulting quality of information gained using a data warehouse solution are tightly linked.

Implementing a concrete DWS is a complex task comprising two major phases. In the DWS configuration phase, a conceptual view of the warehouse is first specified according to user requirements (data warehouse design). Then, the involved data sources and the way data will be extracted and loaded into the warehouse (data acquisition) is determined. Finally, decisions about persistent storage of the warehouse using database technology and the various ways data will be accessed during analysis are made. After the initial load (the first load of the DWH according to the DWH configuration), during the DWS operation phase, warehouse data must be regularly refreshed, i.e., modifications of operational data since the last DWH refreshment must be propagated into the warehouse such that data stored in the DWH reflect the state of the underlying operational systems. Besides DWH refreshment, DWS operation includes further tasks like archiving and purging of DWH data or DWH monitoring.


Data warehouse design

Data warehouse design methods consider the read-oriented character of warehouse data and enable efficient query processing over huge amounts of data. A special type of relational database schema, called the star schema, is often used to model the multiple dimensions of warehouse data (in contrast to the two-dimensional representation of normal relational schemas). In this case, the database consists of a central fact table and several dimension tables. The fact table contains tuples that represent business facts (measures) to be analyzed, e.g., sales or shipments. Each fact table tuple references multiple dimension table tuples, each one representing a dimension of interest like products, customers, time, region or salesperson. Dimensions usually have hierarchies associated with them that specify aggregation levels and hence the granularity of viewing data (e.g., day -> month -> quarter -> year is a hierarchy on the time dimension [1]). Since dimension tables are not normalized, joining the fact table with the dimension tables provides different views (dimensions) of the warehouse data in an efficient way. A variant of the star schema, called the snowflake schema, is commonly used to explicitly represent the dimensional hierarchies by normalizing the dimension tables.

A more natural way to consider the multidimensionality of warehouse data is provided by the multidimensional data model. Thereby, the data cube is the basic underlying modeling construct. Special operations like pivoting (rotate the cube), slicing-dicing (select a subset of the cube), roll-up and drill-down (increasing and decreasing the level of aggregation) have been proposed in this context. For the implementation of multidimensional databases, there are two main approaches. In the first approach, extended relational DBMSs, called relational OLAP (ROLAP) servers, use a relational database to implement the multidimensional model and operations. ROLAP servers provide SQL extensions and translate data cube operations to relational queries. In the second approach, multidimensional OLAP (MOLAP) servers store multidimensional data in non-relational specialized storage structures. These systems usually precompute the results of complex operations (during storage structure building) in order to increase performance.
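The fact-table/dimension-table layout described above can be sketched with a tiny in-memory SQLite database. All table and column names here (sales_fact, time_dim, product_dim) are made up for illustration, not taken from any real warehouse:

```python
import sqlite3

# Hypothetical star schema: one fact table referencing two dimension tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product_dim (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE time_dim    (time_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
CREATE TABLE sales_fact  (product_id INTEGER, time_id INTEGER, amount REAL,
                          FOREIGN KEY(product_id) REFERENCES product_dim(product_id),
                          FOREIGN KEY(time_id)    REFERENCES time_dim(time_id));
INSERT INTO product_dim VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
INSERT INTO time_dim VALUES (1, '2008-01-15', '2008-01', 2008),
                            (2, '2008-02-10', '2008-02', 2008);
INSERT INTO sales_fact VALUES (1, 1, 100.0), (2, 1, 50.0), (1, 2, 75.0);
""")

# Roll-up: join the fact table to a dimension and aggregate to the month level.
rows = con.execute("""
    SELECT t.month, SUM(f.amount)
    FROM sales_fact f JOIN time_dim t ON f.time_id = t.time_id
    GROUP BY t.month ORDER BY t.month
""").fetchall()
print(rows)  # [('2008-01', 150.0), ('2008-02', 75.0)]
```

Grouping by t.year instead of t.month would be the roll-up one level higher in the day -> month -> quarter -> year hierarchy.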

Data acquisition

For the first task of data acquisition, the data extraction, standard interfaces (e.g., ODBC, EDA/SQL) or gateways are often used in commercial extraction tools and proprietary extraction scripts. Although often underestimated, data extraction is one of the most time-consuming tasks of data warehouse development, especially when older legacy systems must be integrated. Usually, data extracted from operational systems contains lots of errors, and must first be transformed and cleaned before loading it into the data warehouse. Data values from operational systems can be incorrect, inconsistent, unreadable or incomplete. Furthermore, different formats and representations may be used in the various operational systems. Particularly for the integration of external data, data cleaning is an essential task in order to get correct and qualitative data into the data warehouse, and includes the following tasks:

convert data to the common, internal warehouse format from a variety of external representations

identify and eliminate duplicates and irrelevant data

transform and enrich data to correct values (e.g., by checking the membership of an attribute in a list)

reconcile differences between multiple sources, due to the use of homonyms (same name for different things), synonyms (different names for same things) or different units of measurement.

After cleaning, data that comes from different sources and will be stored in the same warehouse table must be merged and possibly set to a common level of detail. Furthermore, time-related information (update/extraction/load date) is usually added to the warehouse data to allow the construction of histories. As mentioned above, one of the main characteristics of a data warehouse is the creation and storage of new base data (compared to the contents of operational systems). Thus, beyond extracting and integrating existing operational data, derived and aggregated data must be calculated using appropriate functions or rules. Finally, before or during loading data into the warehouse, further tasks like filtering, sorting, partitioning and indexing are often required [14]. Populating the target warehouse is then performed using a DBMS's bulk data loader or an application with embedded SQL.
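The cleaning tasks listed above (format conversion, duplicate elimination, unit reconciliation) can be sketched in a few lines of Python. The records, field names and conversion rates are invented for the example:

```python
# Minimal sketch of the cleaning steps listed above, on made-up records.
raw = [
    {"cust": "ACME Corp", "amount": "1,200.50", "currency": "USD"},
    {"cust": "ACME Corp", "amount": "1,200.50", "currency": "USD"},  # duplicate
    {"cust": "Globex",    "amount": "300",      "currency": "EUR"},
]

RATES = {"USD": 1.0, "EUR": 1.3}  # hypothetical conversion to a common unit

def clean(records):
    seen, out = set(), []
    for r in records:
        key = (r["cust"], r["amount"], r["currency"])
        if key in seen:                       # identify and eliminate duplicates
            continue
        seen.add(key)
        amount = float(r["amount"].replace(",", ""))  # convert to internal format
        out.append({"cust": r["cust"],
                    # reconcile different units of measurement
                    "amount_usd": round(amount * RATES[r["currency"]], 2)})
    return out

cleaned = clean(raw)
print(cleaned)
# [{'cust': 'ACME Corp', 'amount_usd': 1200.5}, {'cust': 'Globex', 'amount_usd': 390.0}]
```

A real ETL tool would also add the time-related load/extraction dates mentioned above before writing the rows to the warehouse.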

Data storage and access

The special nature of warehouse data and access necessitates adjusted mechanisms for data storage, query processing and transaction management. Complex queries and operations involving large volumes of data require special access methods, storage structures and query processing techniques. For example, bitmap indices and various forms of join indices (e.g., Starjoin [10], parallel join [9]) can be used to significantly reduce access time. Furthermore, since access to warehouse data is mostly read-oriented, complex concurrency control mechanisms and transaction management must be adapted [14].

Access to the data warehouse can also be sped up by settling subsets of it in the form of data marts. A data mart is a selected part of the data warehouse which supports specific decision support application requirements of a company's department or geographical region. It usually contains simple replicates of warehouse partitions or data that has been further summarized or derived from base warehouse data. Instead of running ad hoc queries against a huge data warehouse, data marts allow the efficient execution of predicted queries over a significantly smaller database.

Virtual data warehouses

The proposal of virtual data warehouses is considered as a way to rapidly implement a data warehouse without the need to store and maintain multiple copies of the source data. Virtual data warehouses often provide a starting point for organizations to learn what end-users are really looking for. End-users have the possibility to directly access real-time source data using advanced networking capabilities tools. The drawbacks of this approach compared to the classical data warehouse approach illustrated in Figure 1 are:

data quality and consistency is not guaranteed since no prior data preparation (reconciliation) takes place,

historical data is usually not available,

end-user access time is usually unpredictable depending on the availability of operational sources, network load (which is high in this approach), query complexity and translations between different database formats.


Today, a plethora of tools, particularly for specific tasks of a DWS like data acquisition, access and management, is available in the market. For the implementation of a complete DWS, a set of tools must be integrated to form a concrete warehousing solution. The ultimate integration goal is to avoid interface problems. The trend is towards "open" solutions (supported e.g., by IBM Visual Warehouse [8], HP Open Warehouse [7] or Prism Solutions [11]) which give the opportunity to combine several tools in one DWS. For example, the HP OpenWarehouse is a framework for designing data warehouses based on HP- and third-party hardware and software components. HP customers can choose from solutions in areas such as data extraction and transformation, relational databases, data access and reporting, OLAP, web-browser applications and data mining. A further recent market trend is the adoption of data marts as a way to use and experiment with data warehouse technology in particular departments (e.g., marketing). Linking the data warehouse to the Internet (as an additional data source or access interface) gains more attention because it allows companies to extend the scope of the warehouse to external information.

Until now, the research community attempts to solve particular problems, mostly using well-known concepts and research results from other research fields (like materialized views, index selection, data partitioning) [2, 14]. The most prominent research project, the WHIPS project at Stanford University, investigates a wide spectrum of data warehousing problems based on techniques of materialized views [13]. In Switzerland, the "Kompetenzzentrum Data Warehousing Strategie" (CC DWS) at the University of St. Gallen (HSG) focuses, together with a number of companies, on the development of a process model for the successful introduction of data warehousing in big companies. Our work in the context of the SIRIUS project focuses on the investigation of techniques for incremental refresh [6]. In the SMART project (a cooperation with Rentenanstalt/Swiss Life), we investigate the design and implementation of a metadata management system for a data warehouse environment.

Developing a data warehouse system is an exceedingly demanding and costly activity, with the typical warehouse costing in excess of $1 million [12]. Nevertheless, data warehousing has become a popular activity in information systems development and management. According to the market research firm Meta Group, the proportion of companies implementing data warehouses exploded from 10% in 1993 to 90% in 1994, and the data warehousing market will expand from $2 billion in 1995 to $8 billion in 1998. Improving access to information and delivering better and more accurate information is for more and more companies a motivation for using data warehouse technology.


1. What is the difference between a summary filter and a detail filter? Please explain with an example.

A detail filter is applied to the whole body of the report.

A summary filter is applied only at the footer level.

For example, if we take Product Line, Product Type and Price, take a total on Price, and apply the filter Price > 20000 as a detail filter, it is going to display in the body only those values which are greater than 20000.

It is the opposite with a summary filter, i.e., for the summary footer it is going to check which totals are greater than 20000 at the footer level.
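The detail/summary distinction maps closely onto SQL's WHERE (rows filtered before aggregation) versus HAVING (totals filtered after aggregation). A small sqlite3 sketch with made-up product data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products (line TEXT, type TEXT, price INTEGER);
INSERT INTO products VALUES
  ('Camping', 'Tent', 30000), ('Camping', 'Stove', 5000),
  ('Golf', 'Club', 25000), ('Golf', 'Ball', 1000);
""")

# Detail filter: rows are filtered BEFORE aggregation (like SQL WHERE),
# so the totals only include rows with price > 20000.
detail = con.execute(
    "SELECT line, SUM(price) FROM products WHERE price > 20000 "
    "GROUP BY line ORDER BY line").fetchall()
print(detail)  # [('Camping', 30000), ('Golf', 25000)]

# Summary filter: totals are computed over ALL rows, then filtered
# (like SQL HAVING) -- the sums include the cheap items too.
summary = con.execute(
    "SELECT line, SUM(price) FROM products GROUP BY line "
    "HAVING SUM(price) > 20000 ORDER BY line").fetchall()
print(summary)  # [('Camping', 35000), ('Golf', 26000)]
```

Note the totals differ: the detail filter dropped the cheap rows before summing, while the summary filter summed everything and only then tested the footer value.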

2. What is the use of a Generated Prompt? Where is it used in a real-time project?

A Generated Prompt is a prompt which is built based on the data type of a query item. For example, in the GO Sales and Retailers package, take Product Line and then click Build Prompt Page; by default a Value Prompt will be displayed. Use it when you don't know which prompt would suit the desired query item.

3. How can we create a report like: if salary > 50,000 display one value, if salary < 50,000 display another?

This can be done using conditional formatting.


From the Insertable Objects pane, double-click whichever query items you want.

Select the Revenue data item and, from the Change Layout menu,

click Define Conditional Styles.

Enter the new value 50,000 and click Insert; again enter the new value 50,000 and click Insert.

For one range, Revenue 50,000 and greater, select the style you

want, and for the other, Revenue 50,000 and below, select the style or color you like.

Finally, click OK. Another process follows like this:

In Cognos 8 you can achieve this by following these steps:

1) Drag a conditional block.

2) Assign a boolean variable (in the expression box, if(sal > 50K)) to the conditional block.

3) Go to the Conditional Explorer; you will find two conditions, Yes and No.

4) When the current block is Yes, drag a layout calculation (sal - 50K).

5) When the current block is No, set the background color of salary to, e.g., green.

4. Write a query to display the top 10 max records from the employee table.

To find the nth value:

select salary from emp a where &n = (select count(distinct

salary) from emp b where a.salary <= b.salary);

To find the nth max, use a.salary <= b.salary;

to find the nth min, use a.salary >= b.salary.


To find the top 10 records:

select * from (select * from emp order by salary desc) where rownum <= 10;

2nd answer:

select * from emp a

where 10 > (select count(distinct sal) from emp b where b.sal > a.sal);

This is the exact query, even when there are duplicate salaries.
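The correlated-subquery approach above can be checked against an in-memory SQLite table. The emp data here is made up, and SQLite's LIMIT plays the role of Oracle's ROWNUM for the top-N part:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (empno INTEGER, sal INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [(1, 5000), (2, 3000), (3, 3000), (4, 8000), (5, 1000)])

# Nth highest distinct salary (n = 2): count how many distinct salaries
# are >= each salary -- the same correlated subquery as above, comparing
# salaries rather than empno values.
n = 2
rows = con.execute("""
    SELECT DISTINCT sal FROM emp a
    WHERE ? = (SELECT COUNT(DISTINCT sal) FROM emp b WHERE b.sal >= a.sal)
""", (n,)).fetchall()
print(rows)  # [(5000,)]  -- 8000 is 1st, 5000 is 2nd

# Top 3 salaries, duplicates included (ROWNUM/LIMIT style).
top3 = con.execute("SELECT sal FROM emp ORDER BY sal DESC LIMIT 3").fetchall()
print(top3)  # [(8000,), (5000,), (3000,)]
```

The count-of-distinct-salaries trick handles duplicate salaries correctly, which is exactly the point the answer makes.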

5. What is a SID?

SID means surrogate key. It holds system-generated artificial key values, which allow historical records to be maintained in the data warehouse more effectively.

6. What is the difference between a macro and a parameter?

Parameter:Specifies the parameter that is satisfied by values chosen in the prompt control.

Applies to: Date & Time Prompt, Date Prompt, Generated Prompt, Interval

Prompt, Select & Search Prompt,Text Box Prompt, Time Prompt, Tree Prompt, Value Prompt

Macro: A macro is a runtime object which uses parameter maps and session parameters in developing conditional query subjects.

7. What is the difference between filter & Condition?

Filter: A filter is a condition which restricts the amount of data displayed in the report.

Condition:An expression that yields a boolean value. Conditions are used in query expressions, query filters, and boolean report variables that can be used for conditional formatting, styles, data sources, layouts and blocks.

8. What are summary filters? Group filters?

Summary filter: If the filter is applied after aggregation, then the aggregated records are filtered.

Group filter: A filter which is applied to a group is known as a group filter.

9. What is a filter? Give an example of one that you created in a report in your project.

We use filter to view limited data. For eg:

a list report with two columns country and revenue

eg: china 1235939

india 40393493

japan 938743739

US 32478327

UK 9420834028

If you drag a filter in the Query Explorer and in the expression box you type country = 'japan', then when you run the report you will be able to view only japan 938743739; you have limited the data to Japan. If you want to see two, three or more countries, use the IN function.

10. What is a prompt? Briefly explain each type of prompt with examples (e.g., value, date, search prompts).

VALUE PROMPT: Value Prompt Retrieves data based on values that you select from a list.

Use this control to show the list of possible values from which users can choose.

Note: The maximum number of items that can appear in a list is 5000.

SELECT AND SEARCH PROMPT: A Select & Search Prompt retrieves values based on search criteria that users specify. Data is then retrieved based on the values users select from the search results. Use this control instead of a value prompt if the list of values is very long, which can slow down performance.

Tip: Users have the option of performing a case-sensitive or case-insensitive search. A case-sensitive search is faster, while a case-insensitive search usually returns more values. You cannot use this control if you are working with SAP BW data sources.

DATE PROMPT:Date Prompt Retrieves data based on a date that you select.

Use this control when you are filtering a date column. With This control, users can retrieve data for a specific day, a set of days, or a range of days.

TIME PROMPT:Time Prompt Retrieves data based on a time that you select.

Use this control to restrict a report to a particular time or time range. For example, you can use this control to see how many orders are received after business hours. You can then use this information to determine the number of staff needed to work after hours.

DATE AND TIME PROMPT: A Date & Time Prompt retrieves data based on a date and time that you select. Use this control when you are filtering a datetime or timestamp column. This control is useful for specifying ranges. For example, you can retrieve all orders received from Monday at 12:00 a.m. to Friday at 5:00 p.m.

INTERVAL PROMPT: An Interval Prompt retrieves data based on a time interval that you specify. Use this control to retrieve data that is related to the passage of time. For example, you can use this control to retrieve a list of products that were returned 30 or more days after they were purchased.

TREE PROMPT: A Tree Prompt retrieves data based on values you select from a list. Values are organized hierarchically. This control is useful when you are working with dimensional data sources. Data is shown from the top of a dimension hierarchy to the most detailed member, and users can choose the level of detail they want to view in the report.

GENERATED PROMPT: A Generated Prompt selects a prompt control based on the data type of the data item. This control acts like a placeholder. When you run the report, the control is replaced by the appropriate prompt control. For example, if you are prompting for date values, the control is replaced by a date & time prompt.

11. What are the types of facts? Explain them.

Three types of Fact...

1. Additive Fact: Additive facts are facts that can be summed up through all of the dimensions in the fact table.

2. Semi-Additive Fact: Semi-additive facts are facts that can be summed up for some of the dimensions in the fact table, but not the others.

e.g., bank balances: a balance is semi-additive since an account's balance cannot be summed across time periods, but if you want to see the current balance of a bank you can sum the current balances of all accounts.

3. Non-Additive Fact: Non-additive facts are facts that cannot be summed up for any of the dimensions present in the fact table.

E.g., ratios, averages and variance.
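The semi-additive case can be demonstrated with a few made-up balance snapshots. Each record is a period-end snapshot (month, account, balance):

```python
# Invented snapshot data to illustrate semi-additivity of balances.
snapshots = [
    ("2008-01", "A", 100), ("2008-01", "B", 200),
    ("2008-02", "A", 150), ("2008-02", "B", 250),
]

# Summing ACROSS ACCOUNTS at one point in time is meaningful:
jan_total = sum(bal for month, acct, bal in snapshots if month == "2008-01")
print(jan_total)  # 300 -- total balance held in January

# Summing ACROSS TIME for one account is NOT meaningful:
a_over_time = sum(bal for month, acct, bal in snapshots if acct == "A")
print(a_over_time)  # 250 -- nobody ever held 250; use an average or last value

# The usual fix: take the latest snapshot instead of the sum.
last_a = [bal for month, acct, bal in sorted(snapshots) if acct == "A"][-1]
print(last_a)  # 150 -- the current balance of account A
```

A fully additive fact like sales amount would sum correctly along both the account and the time dimension, which is exactly the distinction between types 1 and 2 above.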

12. What is a tabular set? How can you break the report?

1. Tabular set: A tabular set object returns a single result set using the UNION, INTERSECT or MINUS operators.

2. Breaking the report: The question is not clear, because using a tabular set we combine two models or queries using the UNION, INTERSECT or MINUS operators.
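The single-result-set behaviour of a tabular set can be imitated with plain SQL UNION. A minimal sqlite3 sketch with invented tables, which also shows why both inputs must expose the same columns (see question 15 below for the same rule in Report Studio):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales_2007 (country TEXT, revenue INTEGER);
CREATE TABLE sales_2008 (country TEXT, revenue INTEGER);
INSERT INTO sales_2007 VALUES ('India', 100), ('Japan', 200);
INSERT INTO sales_2008 VALUES ('India', 150), ('Japan', 200);
""")

# UNION requires both inputs to have the same number of compatible
# columns; duplicate rows are removed (UNION ALL would keep them).
rows = con.execute("""
    SELECT country, revenue FROM sales_2007
    UNION
    SELECT country, revenue FROM sales_2008
    ORDER BY country, revenue
""").fetchall()
print(rows)  # [('India', 100), ('India', 150), ('Japan', 200)]
```

The duplicate ('Japan', 200) row appears only once: that is the de-duplicating behaviour of UNION, as opposed to UNION ALL.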

13. If there are a list, a crosstab and a chart, how can you apply a single filter to all objects?

First drag the list, crosstab and chart objects to the report pane, then select the crosstab and, in the Properties pane under the Data options, in the Query option select Query1.

Do the same for the chart, then apply the filter. In short, if you have more objects in the report, associate all objects with one query.

14. How do you use a cascade macro in Report Studio? Does anybody know about cascade macros and the procedure to use them?

Using a cascade macro, we can create more than one condition (e.g., sales < 500, sales > 2000, or 500 < sales < 2000). The end user may choose a condition and, based on that, get the report.

15. 1)How can we do the Union in Report Studio?

1. For making a union query, both input queries should contain the same columns.

2)What is the meaning of Aggregation and Rollup Aggregation?

2. Aggregation refers to detail-level aggregation; Rollup Aggregation is for summary-level aggregation.

3)How can we hide the columns in Report Studio and How can we view the hiding Objects?

3. By setting the Box Type property to None, we can hide a column. To see the hidden objects, click the Visual Aids button and check View Hidden Objects.

4)Can we provide joins in Report Studio and in which situation we can go for outer join?

4. Queries can be joined by making use of a join query. When there are null records in the joining column which need to be fetched, we go for outer joins (1..0).

5)If we take a Date Prompt and run it once it will show default data,how can we change it exact 1 month back data?

5. To change the default date selection in the date prompt, JavaScript needs to be implemented.

16. Which service handles requests for Cognos Connection, Query Studio and Event Studio?

The presentation service handles the requests for these.

17. Where do we see the run time of the report?

You can use the AsOfTime() function. It is located at the bottom of the report page along with the as-of-date and page-number functions. Drag and drop the as-of-time function onto your report page; it will give you the time when the report was run.

18. If report performance is low, which three areas can you check in Framework Manager? What are the three areas to concentrate on?

If report performance is low, then we need to check how many rows the report is actually generating per page, and what the time limit for the report is.

In Cognos Configuration we need to reduce the load on the presentation service.

19. Suppose you have a report with no data. When you run the report in Report Studio, how can you tell the end user that it has no data while he runs it? Is it possible? If so, how?

Suppose there is no data in the report: when you run the report, only the title will be displayed and the data part will be empty.

In such a scenario, you have to tell the user that the selection criteria did not meet the business rules.

E.g., a user runs a "monthly stock report" with "week-15" as the time selection; it will not display records, because this report is associated with the calendar month, so a month has to be selected. The last date of the month is considered for the monthly stock report.

20.Can anyone tell me the step by step process how to connect cognos to oracle?

I think during the ETL process the ETL people make the connections to the required RDBMS in the form of ODBC connections; that way they can import the tables from the data source into the target tables (i.e., into the data warehouse). The data warehouse then acts as a data source for the BI modelers for creating the interface. The project contains query subjects, which contain the SQL. The ETL people set up the ODBC connections during the ETL process.

21. What are the limitations of a value prompt while creating a report? What is a select and search prompt?

Value Prompt Limitation:

If you prompt for a specified range of values using the in_range operator, a range of available values will be displayed under the From and To tabs, and you need to select the values there. If the available values are decimals, it is difficult to select them. E.g., for a range of 0 to 5000 you cannot simply enter the range: under the From tab you need to search for the value 0, and under the To tab you need to select 5000 from the available values, which are displayed in random order. Imagine the values are decimals (e.g., 5000.009): is it easy to select the value 5000.009 under the To tab if the values are displayed in random order?

To overcome this, we need to go for a text prompt when prompting for a specified range; whatever range you specify will get prompted. When you need cascading, use either a value prompt or a select & search prompt.

Select & search Prompt:

When you need to prompt for values that contain or start with specific characters, enter them under the Select tab and then click the Search tab. Matching values will be displayed in the search results; select the values and click the right arrow to move them to the display box, then click Finish (click Next if another prompt is present). In short, it is used to display single- or multi-select values (=, IN operators). The in_range operator is not available.

22. How can we show the top 10 and bottom 10 records in the same list?

For example, if you want to show the top ten employees who generated the maximum revenue, add a new data item with the expression rank(total(revenue) for employeename) and, in the filter definition, filter that rank data item to be at most 10.

23. If I select the current month and year, how can we display the previous 11 months' data?

A filter which filters the data for the last 11 months should be defined.

The filter below, with minor modifications, should work (I did not try it), treating parameter1 as the prompt parameter:

month_field between (parameter1 - 11) and (parameter1)

You can also use an add-months function, i.e., addmonths(parameter1, -11).
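The month arithmetic behind that filter can be sketched in plain Python. The helper names here (add_months, last_12_month_window) are invented for the example; they mimic what an add-months report function computes:

```python
from datetime import date

def add_months(d, n):
    """Shift a date by n months. Day-of-month clamping is omitted for
    brevity; it is never needed for day-1 dates as used below."""
    m = d.month - 1 + n
    return date(d.year + m // 12, m % 12 + 1, d.day)

def last_12_month_window(selected):
    """First day of the month 11 months back, through the selected date,
    i.e. a 12-month window ending in the selected month."""
    start = add_months(selected.replace(day=1), -11)
    return start, selected

# E.g., selecting September 2008 yields a window starting October 2007.
start, end = last_12_month_window(date(2008, 9, 4))
print(start, end)  # 2007-10-01 2008-09-04
```

A report filter would then keep rows with start <= month_field <= end, which is the between-clause given above expressed on real dates.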

24.How can we do the unit testing?

Unit testing is something that a report developer does from his end before the report is passed on to UAT. Generally TOAD is used to write SQL to validate the report. In my case, I have used TOAD to do the unit testing. (If you don't have TOAD, I am not really sure if there is any other way to do it.)

25. What are governor settings in Cognos?

Governor settings are used to reduce system resource requirements and improve performance. You set governors before you create packages to ensure the metadata in the package contains the specified limits. All packages that are subsequently published will use the new settings. There are a number of governors that you can set. Governor settings in the model can be overridden by governor settings in Report Studio.

1.Report Table Limits: You can control the number of tables that a user can retrieve in a query or report.

2.Data Retrieval Limits: You can control the number of rows that are returned in a query or report.

3.Query Execution Time Limits: You can limit the time that a query can take.

4.Large Text Items Limit: You can control the character length of BLOBs (binary large objects) that a user can retrieve in a query or report.

5.Allow Enhanced Model Portability at Run Time: You can use a separate but structurally similar data source at run time. The database schema must be the same between the two data sources.

6.Allow Usage of Local Cache: Select this governor to specify that all reports based on this model should use cached data.

7.Outer Joins: You can control whether outer joins can be used in your query or report.

8.Cross-Product Joins: You can control whether cross-product joins can be used in your query or report.

9.Use With Clause When Generating SQL: You can choose to use the WITH clause with Cognos SQL if your data source supports it.

10.Suppress Null Values for Dimensional Data Sources: This governor is supported for SAP BW data sources only; it does not work with relational or OLAP data sources. Some queries can be very large because null values are not filtered out. Because null suppression is performed by the underlying data provider, in this case SAP BW, this reduces the amount of data transferred to the various Cognos 8 client products, improving performance and returning only relevant data in the report.

26.What is the difference between a primary key and a surrogate key? Why do we go for a surrogate key?

A primary key, as all of you know, is a unique key. A surrogate key is a non-intelligent (system-generated) key that is used to replace a composite key. We go for a surrogate key because, for example, if a person living in one city moves to another and the person ID is the primary key, we need to add a record for the updated location, but the primary key will not accept the same value again. To keep both records we use a surrogate key.
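The moved-person scenario is essentially a slowly changing dimension. A minimal sketch (illustrative names, not tied to any specific tool) of why a surrogate key lets both versions of the record coexist:

```python
# dim_person plays the role of a dimension table; "sk" is the surrogate
# key, "person_id" the natural/business key that repeats across versions.
dim_person = []
next_surrogate = 1

def add_version(person_id: int, city: str) -> int:
    """Insert a new row; the surrogate key stays unique even though
    the natural key repeats, so history is preserved."""
    global next_surrogate
    row = {"sk": next_surrogate, "person_id": person_id, "city": city}
    dim_person.append(row)
    next_surrogate += 1
    return row["sk"]

add_version(101, "Hyderabad")   # original record
add_version(101, "Bangalore")   # person moved: same person_id, new row

# Both versions coexist; a primary key on person_id alone would reject
# the second insert.
print([r["city"] for r in dim_person if r["person_id"] == 101])
# ['Hyderabad', 'Bangalore']
```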

27.How do you hide/unhide prompts that should not be used by unauthorized users?

1) Click on the prompt and go to the Properties pane.

2) In the Properties pane there is a property called Visible; set it to No.

3) Set a default value (for example 1) so that if the user does not select anything, the report still proceeds to the next page.

4) In this way you can hide the prompt.

How to hide a prompt (a second answer follows):

1) Drag a conditional block from the toolbox. 2) Place the prompt inside the conditional block. 3) Select the prompt and, in the Properties pane, set Visible to No.

4) The prompt will not be visible in the work area now, but when you run the report you can still see the prompt.

28.How can you improve performance at the Report Studio level? What is the difference between Cognos SQL and Native SQL? How can you create cubes in Framework Manager?

Report Studio performance depends on network speed, complex calculations, use of the NOT IN operator, and outer joins. Cognos SQL and Native SQL are two different types of SQL generated by the Cognos tool. Native SQL is a subset of (or the same as) Cognos SQL: Native SQL contains only the functionality that gets executed at the database end, while Cognos SQL also includes the locally processed logic. As for cubes, we can create a package using a cube; the cube is a data source here.

29.What are claims? How many types of claims do we face from the client side? Client-side claims are only about the report specification.

30.How do you schedule a report in Cognos? I have 2 users, u1 and u2. How do you schedule the report for these 2 users (each user gets the result once a week), with u1 getting the result every Monday in his inbox and u2 getting it every Wednesday in his inbox?

Example - Schedule a Report

You can schedule entries that are stored in Cognos Connection, such as reports, queries, and agents, so that they run at a time that is convenient for you. In this exercise, you will learn how to:

save a report as a report view

schedule a report so it runs daily

use the schedule management tool to view the status of the schedule

You are a business analyst for the Great Outdoors company. You want to schedule the Return Quantity by Product Line Chart report because some people in your organization need the data updated daily. You schedule the report to run daily, and then check that it is properly scheduled.

Things to Notice

When a report runs according to a schedule, the report output is saved and appears in the Actions column in Cognos Connection. At any time, you can click the arrow next to the help button, and click Go to the Welcome Page to return to the Cognos 8 Welcome page.

Steps

Open Cognos Connection: start your Web browser.

In the address bar, type the URL supplied by your administrator, and then press Enter. The URL looks something like this: http://servername/cognos8

In the Cognos 8 Welcome page, click Cognos content, click the GO Data Warehouse (analysis) package, and then click Query Studio Report Samples.

Locate the Return Quantity by Product Line Chart report. You can also use any one of the Cognos sample reports.

Create a report view for a daily view of the report:

In the Actions column, click the create a report view button next to the report.

In the Name box, remove the words Report View of from the beginning of the name, type (Daily) at the end of the name, and then click Finish.

A new entry appears in Cognos Connection, with the report view icon.

Schedule the report view to run daily:

For the entry, in the Actions column, click the schedule button.

Under Frequency, click the By Day tab.

Accept the default time and other defaults. Click OK.

Check the schedule:

Next to the my area icon, click the down arrow, and click My Activities and Schedules.

Click Schedules.

In the Filter pane, select Any status and Any priority, and click Apply.

If you are accepting the default settings, you will not see any changes in the schedule.

Your report view appears in the list of scheduled reports.

Try on Your Own

Modify the schedule so that the report runs weekly.

Tip: In the Actions tab, click More, and then click Modify the schedule.

Permanently remove the schedule.

Tip: In the Actions tab, click More, and then click Remove the schedule.

31.For example, I developed some reports on the Employee package; now I have to develop the same reports on another package. How can I change the reports to point to another package?

Open the report and go to File; from the drop-down list select "Report Package" and change it to the package you want. Now change the definition expression of all the query items and filters. For example, if you have a query item named Employee Name, its definition expression may appear as [Empl Package][Emp Details][Employee Name]; change it to [Other Package][Emp Details][Employee Name]. That's it.

32.In ReportNet, how do you burst the reports for every 5th day, 10th day, 15th day, and 20th day?

Set burst options in Report Studio, enable the report for bursting in Cognos Connection, create 4 report views for that report, and create a schedule for each report view: the 1st for every 5th day, the 2nd for every 10th day, the 3rd for every 15th day, and the 4th for every 20th day. I am sure this can work.

33.What is cardinality?

Cardinality is the relationship between tables: 1-to-1, 1-to-many, or many-to-many. Cardinality specifies the nature of the relationship between query subjects in numeric notation: the first part indicates the type of join (inner or outer) and the second part indicates the number of matching records.

34.Give me some examples of lifecycle reporting; I mean, which lifecycle will we use for reporting?

There is no specific reporting lifecycle. We can generate the reports from a data warehouse / data marts, or we can generate the reports directly from OLTP systems. What I mean by generating reports from OLTP systems is loading data into one system and generating the reports from it. But this is not recommended; it depends on the business.

35.SQL*Loader can load data in three ways; what are those three types?

Conventional Path Loads, Direct Path Loads, External Table Loads

Conventional Path Loads: During conventional path loads, the input records are parsed according to the field specifications, and each data field is copied to its corresponding bind array. When the bind array is full (or no more data is left to read), an array insert is executed.

Direct Path Loads: A direct path load parses the input records according to the field specifications, converts the input field data to the column datatype, and builds a column array. The column array is passed to a block formatter, which creates data blocks in Oracle database block format. The newly formatted database blocks are written directly to the database, bypassing most RDBMS processing. Direct path load is much faster than conventional path load, but entails several restrictions.

External Table Loads: An external table load creates an external table for data in a data file and executes INSERT statements to insert the data from the data file into the target table.

36.What is the difference between DSS and ODS?

ODS:
1.Designed to support operational monitoring
2.Data is volatile
3.Current data
4.Detailed data

DSS:
1.Designed to support the decision-making process
2.Data is non-volatile
3.Historical data
4.Summary data





Framework manager (FM)
1.Diff b/w presentation layer and physical layer
2.How to create a model
3.Dimensions and measures in model
4.How you create a prompt in FM
5.How to publish package in network or local system
6.What errors occurred when verify the model
7.How to create filters in FM and types of filters in FM
8.What is the use of segment and how to create it
9.Governor settings in FM
10.How would you maintain the relations in FM?
11.How do you specify the joins?
12.What is parameter mapping?
13.In how many ways to provide the security in FM
14.How many kinds of query subjects
15.Reportnet server components
16.Cognos connection components
17.How do you provide package security?
18.What type of problems arise when create a join
19.What is the use of multiple packages?
Report studio
1.What is cascading prompt?
2.In report page how do you create cascading prompts
3.Diff b/w section and group by
4.Types of filters and where you are using group filters and tabular filters
5.Conditional explorer where you are using
6.What is performance reporting?
7.Types of prompts
8.Tabular model,tabular reference, tabular set, tabular sql
9.How to use burst option and when
10.Diff b/w Report studio and query studio
11.What is lock and unlock objects.
12.How do you test the reports in QS
13.Do you have any idea about migration from Impromptu to CRN and TF model to CRN?
Query Studio
How do you use prompts in Q.S?
SQL
Definitions as well as syntax and queries
What is the GROUP BY clause?
Example: write the query for department-wise employees
Diff b/w the WHERE clause and the HAVING clause
Definitions of the GROUP BY, HAVING, and WHERE clauses
Diff between UNION, UNION ALL, and INTERSECT
Types of joins in SQL, with explanations
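Several of the SQL questions above (GROUP BY, and WHERE vs HAVING) can be answered with one runnable example; the table and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, dept TEXT, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    ("a", "HR", 1000), ("b", "HR", 2000),
    ("c", "IT", 3000), ("d", "IT", 4000), ("e", "IT", 500),
])

# WHERE filters individual rows BEFORE grouping; HAVING filters the
# grouped results AFTER aggregation -- the core of the diff b/w the two.
rows = conn.execute("""
    SELECT dept, COUNT(*) AS emp_count
    FROM emp
    WHERE sal >= 1000          -- row-level filter: drops 'e'
    GROUP BY dept              -- dept-wise employee counts
    HAVING COUNT(*) >= 2       -- group-level filter on the aggregate
    ORDER BY dept
""").fetchall()
print(rows)  # [('HR', 2), ('IT', 2)]
```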


1. Difference between filter & condition

The major difference between query conditions and document filters is that a condition is applied to the query and limits the data retrieved from the database, whereas the filter is applied to the data in the document to hide information and display only the information that you want to appear
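That distinction can be sketched generically (sqlite3 stands in for the database; this is an illustration of the idea, not Cognos internals): a condition trims data at the query, while a filter hides rows from data already retrieved:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100), ("West", 200), ("East", 300)])

# Condition: applied in the query, so only matching rows leave the DB.
condition_rows = conn.execute(
    "SELECT region, amount FROM sales WHERE region = 'East'").fetchall()

# Filter: everything is retrieved first, then rows are hidden client-side.
all_rows = conn.execute("SELECT region, amount FROM sales").fetchall()
filter_rows = [r for r in all_rows if r[0] == "East"]

# Same rows end up displayed, but the condition retrieved less data.
print(condition_rows == filter_rows)      # True
print(len(all_rows) - len(filter_rows))   # 1 row retrieved then hidden
```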

2.What is the difference between a cascading report and a drill-through report? Why do we go for a drill-through report?

A cascading report contains one report and works based on a condition.

Drill-through involves two reports: a parent report (summary) and a child report (detail).

The child report contains a parameter that must accept the values passed from the parent report.

3.What types of problems can we face in general at the report level?

The most common problems are:

1.No data appears in the report (to resolve this, check the data source or the package).

2.Unexpected or incorrect values appear in the report; the report may be running with limited data.

3.The report filter does not work: values do not appear in the filter option, or the report cannot be opened in Excel, CSV, or XML format.

4.How do you create measures and dimensions?

You can create measures once you have imported all the data from the data source. You create measures and dimensions by dragging the required source items from the data source into the dimension map and the measures tab. (You also need to define the scope of the measures for all the dimensions.)

What was the actual purpose of portfolio in Cognos?

Portfolio is like a summary page. Using a portfolio you can demonstrate a generated report or survey to your clients. On one single portfolio you can place links to reports, video clips, images, and many more items.

5.How can you restrict access to different parts of a given report?

In Cognos 8, you can do it in three stages
1) Package Level
2) Object level
3) Data level
Package level

Expand the namespace,
R-click on the package > click on Edit Package
The Package Access window appears;
remove “All users” from it
Click Add, select Cognos
Select “Report Administrator”,
OK it, save the package in Framework Manager (FM)
Publish the package
Object Level

Expand the namespace,
R-click on a table > click on “Specify Object Security”
In the “Specify Object Security” window,
click on Add
In the “Select users and groups” dialog box,
select Cognos
Select Report Administrator & Consumers
Click Add
In the new window, allow Report Administrator and deny Consumers
Now R-click on the namespace,
click on “Specify Object Security”
Here allow both Consumers & Report Administrator
Right-click on the package,
click “Explore package”;
a window opens
Select the “Object Security” tab

You will be able to see who is allowed and who is not
Data level

Right-click on a table,
select the query subject,
select “Specify Security Filter”
The “Specify Data Security” dialog box appears;
click on Add
select Cognos
select Consumers,
OK it
Click the first row,
click on Insert
In the Filters folder,
select the appropriate filter
Save the package and publish it

6.What are the contents available in Cognos 8?

1)Framework Manager

2)Metric Designer

Both of the above connect to Cognos Connection to run, view, and manage content:

Cognos Viewer - view reports

Cognos Office Connection - Cognos content in MS Office

Metric Studio - monitor performance

Query Studio - create queries

Analysis Studio - analyse items in business data

Event Studio - monitor events

Report Studio - create reports


Informatica interview questions & FAQs

What is a source qualifier?
What is a surrogate key?
What is difference between Mapplet and reusable transformation?
What is DTM session?
What is a Mapplet?
What is a look up function? What is default transformation for the look up function?
What is difference between a connected look up and unconnected look up?
What is up date strategy and what are the options for update strategy?
What is subject area?
What is the difference between truncate and delete statements?
What kind of Update strategies are normally used (Type 1, 2 & 3) & what are the differences?
What is the exact syntax of an update strategy?
What are bitmap indexes and how and why are they used?
What is bulk bind? How does it improve performance?
What are the different ways to filter rows using Informatica transformations?
What is referential Integrity error? How do you rectify it?
What is DTM process?
What is target load order?
What exactly is a shortcut and how do you use it?
What is a shared folder?
What are the different transformations where you can use a SQL override?
What is the difference between a Bulk and Normal mode and where exactly is it defined?
What is the difference between Local & Global repository?
What are data driven sessions?
What are the common errors while running an Informatica session?
What are worklets and what is their use?
What is change data capture?
What exactly is tracing level?
What is the difference between constraints based load ordering and target load plan?
What is a deployment group and what is its use?
When and how a partition is defined using Informatica?
How do you improve performance in an Update strategy?
How do you validate all the mappings in the repository at once?
How can you join two or more tables without using the source qualifier override SQL or a Joiner transformation?
How can you define a transformation? What are different types of transformations in Informatica?
How many repositories can be created in Informatica?
How many minimum groups can be defined in a Router transformation?
How do you define partitions in Informatica?
How can you improve performance in an Aggregator transformation?
How does the Informatica know that the input is sorted?
How many worklets can be defined within a workflow?
How do you define a parameter file? Give an example of its use.
If you join two or more tables, pull about two columns from each table into the source qualifier, then pull one column from the source qualifier into an Expression transformation, and then do a ‘generate SQL’ in the source qualifier, how many columns will show up in the generated SQL?
In a Type 1 mapping with one source and one target table what is the minimum number of update strategy transformations to be used?
At what levels can you define parameter files and what is the order?
In a session log file where can you find the reader and the writer details?
For joining three heterogeneous tables how many joiner transformations are required?
Can you look up a flat file using Informatica?


Sudhakar said...

Good Collection...
Really worth...
Very Useful for facing interviews...

