Monday, 30 June 2014

Oracle Database 11g Installation on Red Hat Linux 5

Check Physical Memory
 (At least 1GB of physical Memory (RAM) is required. EX: 2059516 kB) 
 
$ grep MemTotal /proc/meminfo

Check Swap Space.
(If RAM is 2GB, you should have 3GB of swap space. EX: 3148732 kB)
 
 
$ grep SwapTotal /proc/meminfo

If you do not have 3GB of swap space, you need to add a swap file (see the "Add swap file space" section below).

Check space available in /tmp
(You need to have at least 150MB to 200MB of free space in the /tmp directory.)
 
$ df -h /tmp

Create a new group and user account.
Log in as root:
/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
/usr/sbin/useradd -g oinstall -G dba oracle
/usr/bin/passwd oracle

Create directories where the Oracle Software and database will be installed.
$ mkdir -p /u01/app/oracle/product/11.2.0/dbhome_1
$ chown -R oracle:oinstall /u01
$ chmod -R 775 /u01

Disk space requirements in /u01
(You need to have at least 10GB to 20GB of space in the /u01 directory.)
 
# df -h /u01


Next we need to adjust the Linux Kernel Parameters to support Oracle.
Open /etc/sysctl.conf and add the following lines:
 
fs.file-max = 65536
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65500
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

Make the kernel parameter changes effective immediately:
 
 
$ /sbin/sysctl -p

Now set the shell limits for the oracle user.
Open /etc/security/limits.conf and add these lines:
oracle           soft    nproc   2047
oracle           hard    nproc   16384
oracle           soft    nofile  1024
oracle           hard    nofile  65536

Where "nproc" is the maximum number of processes available to the user and "nofiles" is the number of open file descriptors.
Open /etc/pam.d/login and add the following line if it is not already there:
session    required     pam_limits.so
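After re-logging in as the oracle user, you can sanity-check that the limits took effect with ulimit. A small sketch (the values reported depend on your limits.conf entries; with the lines above, a fresh oracle login should show soft nproc 2047 and hard nofile 65536):

```shell
# Show the shell limits for the current session.
ulimit -Su   # soft limit on user processes (nproc)
ulimit -Hu   # hard limit on user processes
ulimit -Sn   # soft limit on open file descriptors (nofile)
ulimit -Hn   # hard limit on open file descriptors
```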

Disable Secure Linux (SELinux).
Open the /etc/selinux/config file, making sure the SELINUX flag is set as follows:
 
SELINUX=disabled

Now switch to the user oracle.


$ su - oracle
 
$ echo $SHELL
 
 
If the returned shell is bash then open ~/.bash_profile and add these lines:


# Oracle settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
 
# If /tmp doesn't have 200M space free then you can workaround it by
# pointing the variables TMP AND TMPDIR to a location where you have 
# sufficient space.
 
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_SID=ora11g; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=$ORACLE_HOME/bin:/usr/sbin:$PATH; export PATH
 
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
export CLASSPATH
 
if [ $USER = "oracle" ]; then
   if [ $SHELL = "/bin/ksh" ]; then
     ulimit -p 16384
     ulimit -n 65536
   else
     ulimit -u 16384 -n 65536
   fi
fi
 
umask 022


Now run the following command to make these changes effective in the current session of user oracle.

-- for bash shell
$ source ~/.bash_profile
 
 
Copy the installation zip file to the /u01 directory and unzip it. Once the unzip has finished, change to the "database" directory created in the previous step and start the Oracle Universal Installer.
 
$ cd /u01/database
 
$ ./runInstaller
 
 
 

Open another console and log in as root. Once logged in, execute the following two scripts:

# /u01/app/oraInventory/orainstRoot.sh

# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

After that, log in as the oracle user and connect with SQL*Plus.
 
 
$ sqlplus / as sysdba
 
Check the installed Oracle version using the query below.
 
SQL> select * from v$version;
 
If the database is not yet started, use the command below to start it.
 
SQL> startup
 
 
And check the listener status if you want to connect with Oracle tools (EX: Toad). Note that lsnrctl is an operating-system command, so run it from the shell rather than from SQL*Plus:
 
$ lsnrctl status
 
Start Listener
----------------
 
$ lsnrctl start
 
 
Check the listener.ora file for the host name and port number:
 
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/
 
 
L1 =
  (ADDRESS_LIST =
     (ADDRESS= (PROTOCOL= TCP)(Host= localhost)(Port= 1521))
  )
SID_LIST_L1 =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = orcl)
    )
  )





If you have more than one listener, specify the listener name when starting, checking, or stopping listener services:
EX:
$ lsnrctl start L1

$ lsnrctl status L1

$ lsnrctl stop L1
 
If you want to connect from TOAD,
find the server's IP address using:
$ ip addr
EX:
Connect Toad to 192.168.48.134 with SID: ORCL







Display (colour view) error:

Log in as root and allow X connections from the server's address:

$ xhost +<ip-address>   (EX: 198.168.48.134)

or allow all hosts:

$ xhost +

Then set DISPLAY to your domain and host:

$ DISPLAY=localhost.localdomain:0.0

$ export DISPLAY

$ echo $DISPLAY




Add swap file space


Login to Root User in LINUX:

Determine the size of the new swap file in megabytes and multiply by 1024 to get the block count. For example, the block count of a 64 MB swap file is 65536.
At a shell prompt as root, type the following command, with count equal to the desired block count:
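The arithmetic can be scripted; a small sketch (64 MB is just an example size):

```shell
# Compute the dd 'count' for a swap file of a given size in MB.
# With bs=1024 (1 KB blocks), count = size_in_MB * 1024.
SWAP_MB=64                    # desired swap file size in MB (example)
COUNT=$((SWAP_MB * 1024))     # number of 1 KB blocks
echo "dd if=/dev/zero of=/swapf bs=1024 count=$COUNT"
```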


$ dd if=/dev/zero of=/swapf bs=1024 count=65536


Setup the swap file with the command:

$ mkswap /swapf

To enable the swap file immediately but not automatically at boot time:

$ swapon /swapf

Edit /etc/fstab and add the line below:

/swapf swap swap defaults 0 0


Restart your system and the new swap file will take effect at every boot.

After that, check whether the swap space has been added using the command below:

$ cat /proc/swaps


If you later want to remove or disable the added swap file, use the commands below.

Log in as root:
$ su - root

Disable the swap file:

# swapoff /swapf

Remove the swap file:

# rm /swapf





Java HotSpot VM Client Error
Download patch 8670579; it fixes this Oracle 11g R2 bug on 32-bit Linux.

[applmgr@centos53server ~]$ dbua
#
# An unexpected error has been detected by HotSpot Virtual Machine:
#
#  SIGSEGV (0xb) at pc=0x9fdf64d4, pid=3662, tid=3086928080
#
# Java VM: Java HotSpot(TM) Client VM (1.5.0_17-b02 mixed mode)
# Problematic frame:
# C  [libnnz11.so+0x3c4d4]
#
# An error report file with more information is saved as hs_err_pid3662.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#
Aborted




We can apply this patch in two ways.

Extract the zip file, copy its config folder into your Oracle home's config folder, replace the files in your Oracle home's lib folder with all the files from the patch's lib folder, and copy the xml folder directly into your Oracle home directory.

Or:

Unzip the patch folder (8670579) into $ORACLE_HOME/OPatch/8670579, then change into that directory and run OPatch:

$ cd $ORACLE_HOME/OPatch/8670579
$ opatch apply




Saturday, 28 June 2014

Oracle 12c New feature to get USER ID ORA_INVOKING_USERID



Returns the user id of the database user who invoked the current statement or view

Syntax: ORA_INVOKING_USERID RETURN NUMBER


SELECT ora_invoking_userid  FROM dual;

Oracle 12c New feature to get USER ORA_INVOKING_USER

 

Returns the name of the database user who invoked the current statement or view

Syntax: ORA_INVOKING_USER RETURN VARCHAR2

SELECT ora_invoking_user  FROM dual;

Oracle 12c Greatest feature: developers can use a sequence as a default value while creating a table.

Default values can reference sequences
This is also for developers who struggle to maintain unique values in
primary key columns. While creating a table, a column default can be
set to sequence.NEXTVAL.

Every developer needs this feature: while creating a table we can declare a default
sequence for the PK column instead of calling the sequence manually in every insert script.
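A minimal sketch (the table and sequence names here are illustrative):

```sql
-- 12c: a column default may reference a sequence
CREATE SEQUENCE emp_seq;

CREATE TABLE emp_demo (
  empno NUMBER DEFAULT emp_seq.NEXTVAL PRIMARY KEY,
  ename VARCHAR2(30)
);

-- empno is populated automatically from the sequence
INSERT INTO emp_demo (ename) VALUES ('SCOTT');
```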



Oracle 12c New Feature: Duplicate Indexes on the same set of columns

Duplicate Indexes  

Oracle 12c lets you create multiple indexes on the same set of columns. In large
applications you might want two different types of index on the same data.
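A sketch of how this looks (index and table names are illustrative; the two indexes must differ in some property, such as index type, and only one can be visible at a time):

```sql
-- a normal B-tree index, visible
CREATE INDEX emp_ename_ix1 ON emp_demo (ename);

-- a second index on the same column list, of a different type, created invisible
CREATE BITMAP INDEX emp_ename_ix2 ON emp_demo (ename) INVISIBLE;
```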


Oracle 12c New Feature: Analytical function improvements, same as MySQL

Top-N feature
As with MySQL's LIMIT clause, Oracle 12c allows a Top-N feature in
queries, along with analytical function improvements.

Examples:

SELECT EMPNO,ENAME,DEPTNO FROM EMP ORDER BY SAL
FETCH FIRST 5 ROWS ONLY;
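For comparison, the pre-12c way to express the same Top-N query uses an analytic function (assuming the usual EMP demo table):

```sql
SELECT empno, ename, deptno
FROM (
  SELECT e.empno, e.ename, e.deptno,
         ROW_NUMBER() OVER (ORDER BY e.sal) AS rn
  FROM emp e
)
WHERE rn <= 5;
```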




Oracle Database 12c supports VARCHAR2 lengths up to 32767.


VARCHAR2 length up to 32767

This will be one of the best features for developers who always struggle
to manage large chunks of data. Earlier versions of the database allow only
up to 4000 bytes in a single VARCHAR2 column, so developers have to use either
CLOB or XML data types, which are comparatively slower than VARCHAR2
processing.
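A sketch of enabling and using the extended limit (this depends on the MAX_STRING_SIZE initialization parameter; changing it to EXTENDED is a one-way change that involves restarting in UPGRADE mode and running utl32k.sql):

```sql
-- enable extended data types (irreversible; see the upgrade procedure)
ALTER SYSTEM SET max_string_size = EXTENDED SCOPE = SPFILE;

-- after the change, VARCHAR2 columns up to 32767 bytes are allowed
CREATE TABLE big_text (
  id    NUMBER PRIMARY KEY,
  notes VARCHAR2(32767)
);
```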

Thursday, 26 June 2014

Data Warehousing Tutorial Part - 9

Parallelism and Partitioning

Data warehouses often contain large tables, and require techniques for both managing these large tables and providing good query performance across these large tables. This chapter discusses two key techniques for addressing these needs.
Parallel execution dramatically reduces response time for data-intensive operations on large databases typically associated with decision support systems (DSS). You can also implement parallel execution on certain types of online transaction processing (OLTP) and hybrid systems.

Overview of Parallel Execution Tuning
Parallel execution is useful for many types of operations accessing significant amounts of data. Parallel execution improves processing for:
  • Large table scans and joins
  • Creation of large indexes
  • Partitioned index scans
  • Bulk inserts, updates, and deletes
  • Aggregations and copying
You can also use parallel execution to access object types within an Oracle database. For example, you can use parallel execution to access LOBs (large binary objects).
Parallel execution benefits systems if they have all of the following characteristics:
  • Symmetric multi-processors (SMP), clusters, or massively parallel systems
  • Sufficient I/O bandwidth
  • Underutilized or intermittently used CPUs (for example, systems where CPU usage is typically less than 30%)
  • Sufficient memory to support additional memory-intensive processes such as sorts, hashing, and I/O buffers
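As an illustration, a degree of parallelism can be requested per statement with a hint (the sales table here is the one created later in this tutorial; the optimizer may still choose a different DOP):

```sql
-- ask for 4 parallel execution servers for this scan
SELECT /*+ PARALLEL(s, 4) */ COUNT(*)
FROM sales s;
```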

When to Implement Parallel Execution

Parallel execution provides the greatest performance improvements in decision support systems (DSS). Online transaction processing (OLTP) systems also benefit from parallel execution, but usually only during batch processing.
During the day, most OLTP systems should probably not use parallel execution. During off-hours, however, parallel execution can effectively process high-volume batch operations. For example, a bank might use parallelized batch programs to perform millions of updates to apply interest to accounts.

Tuning Physical Database Layouts

This section describes how to tune the physical database layout for optimal performance of parallel execution. The following topics are discussed:
  • Types of Parallelism
  • Partitioning Data
  • Partition Pruning
  • Partition-wise Joins

Types of Parallelism

Different parallel operations use different types of parallelism. The optimal physical database layout depends on what parallel operations are most prevalent in your application.
The basic unit of parallelism is called a granule. The operation being parallelized (a table scan, table update, or index creation, for example) is divided by Oracle into granules. Parallel execution processes execute the operation one granule at a time. The number of granules and their size affect the degree of parallelism (DOP) you can use. They also affect how well the work is balanced across query server processes.

Block Range Granules

Block range granules are the basic unit of most parallel operations. This is true even on partitioned tables; it is the reason why, on Oracle, the parallel degree is not related to the number of partitions.
Block range granules are ranges of physical blocks from a table. Because they are based on physical data addresses, Oracle can size block range granules to allow better load balancing. Block range granules permit dynamic parallelism that does not depend on static preallocation of tables or indexes. 

Partition Granules

When partition granules are used, a query server process works on an entire partition or subpartition of a table or index. Because partition granules are statically determined when a table or index is created, partition granules do not allow as much flexibility in parallelizing an operation. This means that the allowable DOP might be limited, and that load might not be well balanced across query server processes.


Partitioning Data
This section describes the partitioning features that significantly enhance data access and greatly improve overall applications performance. This is especially true for applications accessing tables and indexes with millions of rows and many gigabytes of data.
Partitioned tables and indexes facilitate administrative operations by allowing these operations to work on subsets of data. For example, you can add a new partition, organize an existing partition, or drop a partition with less than a second of interruption to a read-only application.

Types of Partitioning

Oracle offers three partitioning methods:
  • Range
  • Hash
  • Composite
Each partitioning method has a different set of advantages and disadvantages. Thus, each method is appropriate for a particular situation.
Range Partitioning
Range partitioning maps data to partitions based on boundaries identified by ranges of column values that you establish for each partition. This method is often useful for applications that manage historical data, especially data warehouses.
Hash Partitioning
Hash partitioning maps data to partitions based on a hashing algorithm that Oracle applies to a partitioning key identified by the user. The hashing algorithm evenly distributes rows among partitions. Therefore, the resulting set of partitions should be approximately of the same size. This makes hash partitioning ideal for distributing data evenly across devices. Hash partitioning is also a good and easy-to-use alternative to range partitioning when data is not historical in content.
Composite Partitioning
Composite partitioning combines the features of range and hash partitioning. With composite partitioning, Oracle first distributes data into partitions according to boundaries established by the partition ranges. Then Oracle further divides the data into subpartitions within each range partition. Oracle uses a hashing algorithm to distribute data into the subpartitions.

Index Partitioning

You can create both local and global indexes on a table partitioned by range, hash, or composite. Local indexes inherit the partitioning attributes of their related tables. For example, if you create a local index on a composite table, Oracle automatically partitions the local index using the composite method.
Oracle supports only range partitioning for global indexes. You cannot partition global indexes using the hash or composite partitioning methods.
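For example, indexes on the range-partitioned sales table defined later in this chapter could be created as follows (index names are illustrative):

```sql
-- LOCAL: one index partition per table partition, same boundaries
CREATE INDEX sales_custid_lix ON sales (s_custid) LOCAL;

-- a global index may itself be range partitioned (hash/composite are not allowed)
CREATE INDEX sales_date_gix ON sales (s_saledate)
GLOBAL PARTITION BY RANGE (s_saledate)
(PARTITION gp1 VALUES LESS THAN (TO_DATE('01-JAN-1995','DD-MON-YYYY')),
 PARTITION gp2 VALUES LESS THAN (MAXVALUE));
```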

Performance Issues for Range, Hash, and Composite Partitioning

The following section describes performance issues for range, hash, and composite partitioning.
Performance Considerations for Range Partitioning
Range partitioning is a convenient method for partitioning historical data. The boundaries of range partitions define the ordering of the partitions in the tables or indexes.
In conclusion, consider using range partitioning when:
  • Very large tables are frequently scanned by a range predicate on a column that is a good partitioning column, such as ORDER_DATE or PURCHASE_DATE. Partitioning the table on that column would enable partitioning pruning.
  • You want to maintain a rolling window of data
  • You cannot complete administrative operations on large tables, such as backup and restore, in an allotted time frame
  • You need to implement parallel DML (PDML) operations
The following SQL example creates the table Sales for a period of two years, 1994 and 1995, and partitions it by range according to the column s_saledate to separate the data into eight quarters, each corresponding to a partition:
CREATE TABLE sales
(s_productid NUMBER,
s_saledate DATE,
s_custid NUMBER,
s_totalprice NUMBER)
PARTITION BY RANGE(s_saledate)
(PARTITION sal94q1 VALUES LESS THAN (TO_DATE('01-APR-1994','DD-MON-YYYY')),
PARTITION sal94q2 VALUES LESS THAN (TO_DATE('01-JUL-1994','DD-MON-YYYY')),
PARTITION sal94q3 VALUES LESS THAN (TO_DATE('01-OCT-1994','DD-MON-YYYY')),
PARTITION sal94q4 VALUES LESS THAN (TO_DATE('01-JAN-1995','DD-MON-YYYY')),
PARTITION sal95q1 VALUES LESS THAN (TO_DATE('01-APR-1995','DD-MON-YYYY')),
PARTITION sal95q2 VALUES LESS THAN (TO_DATE('01-JUL-1995','DD-MON-YYYY')),
PARTITION sal95q3 VALUES LESS THAN (TO_DATE('01-OCT-1995','DD-MON-YYYY')),
PARTITION sal95q4 VALUES LESS THAN (TO_DATE('01-JAN-1996','DD-MON-YYYY')));
Performance Considerations for Hash Partitioning
Unlike range partitioning, the way in which Oracle distributes data in hash partitions does not correspond to a business, or logical, view of the data. Therefore, hash partitioning is not an effective way to manage historical data. However, hash partitions share some performance characteristics of range partitions; for example, partition pruning is available but limited to equality and IN-list predicates. You can also use partition-wise joins, parallel index access, and PDML.
As a general rule, use hash partitioning:
  • To improve the availability and manageability of large tables or to enable PDML, in tables that do not store historical data (where range partitioning is not appropriate).
  • To avoid data skew among partitions. Hash partitioning is an effective means of distributing data, because Oracle hashes the data into a number of partitions, each of which can reside on a separate device. Thus, data is evenly spread over as many devices as required to maximize I/O throughput. Similarly, you can use hash partitioning to evenly distribute data among the nodes of an MPP platform that uses the Oracle Parallel Server.
  • If it is important to use partition pruning and partition-wise joins according to a partitioning key.

The following example creates four hashed partitions for the table Sales using the column s_productid as the partition key:
CREATE TABLE sales
(s_productid NUMBER,
s_saledate DATE,
s_custid NUMBER,
s_totalprice NUMBER)
PARTITION BY HASH(s_productid)
PARTITIONS 4;

Specify the partition names only if you want some of the partitions to have different properties than the table. Otherwise, Oracle automatically generates internal names for the partitions. Also, you can use the STORE IN clause to assign partitions to tablespaces in a round-robin manner.
Performance Considerations for Composite Partitioning
Composite partitioning offers the benefits of both range and hash partitioning. With composite partitioning, Oracle first partitions by range, and then within each range Oracle creates subpartitions and distributes data within them using a hashing algorithm. Oracle uses the same hashing algorithm to distribute data among the hash subpartitions of composite partitioned tables as it does for hash partitioned tables. Composite partitions:
  • Support historical data at the partition level
  • Support the use of subpartitions as units of parallelism for parallel operations such as PDML, for example, space management and backup and recovery
  • Are eligible for partition pruning and partition-wise joins on the range and hash dimensions
Using Composite Partitioning
Use the composite partitioning method for tables and local indexes if:
  • Partitions must have a logical meaning to efficiently support historical data
  • The contents of a partition may be spread across multiple tablespaces, devices, or nodes (of an MPP system)
  • You need to use both partition pruning and partition-wise joins even when the pruning and join predicates use different columns of the partitioned table
  • You want to use a degree of parallelism that is greater than the number of partitions for backup, recovery, and parallel operations

The following SQL example partitions the table Sales by range on the column s_saledate to create four partitions. This takes advantage of ordering data by a time frame. Then within each range partition, the data is further subdivided into four subpartitions by hash on the column s_productid.
CREATE TABLE sales(
s_productid NUMBER,
s_saledate DATE,
s_custid NUMBER,
s_totalprice NUMBER)
PARTITION BY RANGE (s_saledate)
SUBPARTITION BY HASH (s_productid) SUBPARTITIONS 4
(PARTITION sal94q1 VALUES LESS THAN (TO_DATE('01-APR-1994','DD-MON-YYYY')),
PARTITION sal94q2 VALUES LESS THAN (TO_DATE('01-JUL-1994','DD-MON-YYYY')),
PARTITION sal94q3 VALUES LESS THAN (TO_DATE('01-OCT-1994','DD-MON-YYYY')),
PARTITION sal94q4 VALUES LESS THAN (TO_DATE('01-JAN-1995','DD-MON-YYYY')));

Each hash subpartition contains one quarter's sales for the product codes that hash to it. The total number of subpartitions is 16.

Partition Pruning

Partition pruning is a very important performance feature for data warehouses. In partition pruning, the cost-based optimizer analyzes FROM and WHERE clauses in SQL statements to eliminate unneeded partitions when building the partition access list. This allows Oracle to perform operations only on partitions relevant to the SQL statement. Oracle does this when you use range, equality, and IN-list predicates on the range partitioning columns, and equality and IN-list predicates on the hash partitioning columns.

For the table Sales partitioned by range on column s_saledate and subpartitioned by hash on column s_productid, consider the following SQL statement:
SELECT * FROM sales
WHERE s_saledate BETWEEN TO_DATE('01-JUL-1994','DD-MON-YYYY') AND
TO_DATE('01-OCT-1994','DD-MON-YYYY') AND s_productid = 1200;

Oracle uses the predicate on the partitioning columns to perform partition pruning as follows:
  • When using range partitioning, Oracle accesses only partitions sal94q2 and sal94q3
  • When using hash partitioning, Oracle accesses only the third partition, h3, where rows with s_productid equal to 1200 are mapped

Pruning Using DATE Columns

In the previous example, the date value was fully specified, 4 digits for year, using the TO_DATE function. While this is the recommended format for specifying date values, the optimizer can prune partitions using the predicates on s_saledate when you use other formats, as in the following examples:
SELECT * FROM sales
WHERE s_saledate BETWEEN TO_DATE('01-JUL-1994','DD-MON-YY') AND
TO_DATE('01-OCT-1994','DD-MON-YY') AND s_productid = 1200;

SELECT * FROM sales
WHERE s_saledate BETWEEN '01-JUL-1994' AND
'01-OCT-1994' AND s_productid = 1200;

However, you will not be able to see which partitions Oracle is accessing as is usually shown on the partition_start and partition_stop columns of the EXPLAIN PLAN command output on the SQL statement. Instead, you will see the keyword 'KEY' for both columns.
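To see the pruning decision yourself, you can query the plan table directly (a sketch; it assumes PLAN_TABLE exists, and the fully specified date literal allows static pruning):

```sql
EXPLAIN PLAN FOR
SELECT * FROM sales
WHERE s_saledate < TO_DATE('01-APR-1994','DD-MON-YYYY');

-- PARTITION_START / PARTITION_STOP show which partitions are accessed;
-- with fully specified dates they hold partition numbers, otherwise 'KEY'
SELECT operation, partition_start, partition_stop
FROM plan_table;
```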

Avoiding I/O Bottlenecks

To avoid I/O bottlenecks, when Oracle is not scanning all partitions because some have been eliminated by pruning, spread each partition over several devices. On MPP systems, spread those devices over multiple nodes.

Partition-wise Joins

Partition-wise joins reduce query response time by minimizing the amount of data exchanged among query servers when joins execute in parallel. This significantly reduces response time and resource utilization, both in terms of CPU and memory. In Oracle Parallel Server (OPS) environments, it also avoids or at least limits the data traffic over the interconnect, which is the key to achieving good scalability for massive join operations.
There are two variations of partition-wise join, full and partial, as discussed under the following headings.

Full Partition-wise Joins

A full partition-wise join divides a large join into smaller joins between a pair of partitions from the two joined tables. To use this feature, you must equi-partition both tables on their join keys. For example, consider a large join between a sales table and a customer table on the column customerid. The query "find the records of all customers who bought more than 100 articles in Quarter 3 of 1994" is a typical example of a SQL statement performing such a join. The following is an example of this:
SELECT c_customer_name, COUNT(*)
FROM sales, customer
WHERE s_customerid = c_customerid
AND s_saledate BETWEEN TO_DATE('01-JUL-1994','DD-MON-YYYY') AND
TO_DATE('01-OCT-1994','DD-MON-YYYY')
GROUP BY c_customer_name HAVING
COUNT(*) > 100;

This is a very large join typical in data warehousing environments. The entire customer table is joined with one quarter of the sales data. In large data warehouse applications, it might mean joining millions of rows. The join method to use in that case is obviously a hash join. But you can reduce the processing time for this hash join even more if both tables are equi-partitioned on the customerid column. This enables a full partition-wise join.
Hash - Hash
This is the simplest method: the Customer and Sales tables are both partitioned by hash into 16 partitions, on c_customerid and s_customerid respectively. This partitioning method enables a full partition-wise join when the tables are joined on the customerid column.

In serial, this join is performed between a pair of matching hash partitions at a time: when one partition pair has been joined, the join of another partition pair begins. The join completes when the 16 partition pairs have been processed.

Continue with  Data Warehousing Tutorial Part - 10 ..... :)

Data Warehousing Tutorial Part - 8

Data Warehousing Schemas

A schema is a collection of database objects, including tables, views, indexes, and synonyms. There are a variety of ways of arranging schema objects in the schema models designed for data warehousing. Most data warehouses use a dimensional model.

Star Schemas

The star schema is the simplest data warehouse schema. It is called a star schema because the diagram of a star schema resembles a star, with points radiating from a center. The center of the star consists of one or more fact tables and the points of the star are the dimension tables.



Other Schemas

Some schemas use third normal form rather than star schemas or the dimensional model.

Snowflake Schema :








Galaxy Schema


Data Warehousing Objects

The following types of objects are commonly used in data warehouses:
  • Fact tables are the central tables in your warehouse schema. Fact tables typically contain facts and foreign keys to the dimension tables. Fact tables represent data, usually numeric and additive, that can be analyzed and examined. Examples include Sales, Cost, and Profit.
  • Dimension tables, also known as lookup or reference tables, contain the relatively static data in the warehouse. Examples are stores or products.

Fact Tables

A fact table is a table in a star schema that contains facts. A fact table typically has two types of columns: those that contain facts, and those that are foreign keys to dimension tables. A fact table might contain either detail-level facts or facts that have been aggregated. 

Creating a New Fact Table

You must define a fact table for each star schema. A fact table typically has two types of columns: those that contain facts, and those that are foreign keys to dimension tables. From a modeling standpoint, the primary key of the fact table is usually a composite key that is made up of all of its foreign keys.
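A minimal star-schema sketch (all table and column names here are hypothetical):

```sql
-- dimension tables hold the relatively static lookup data
CREATE TABLE product_dim (product_id NUMBER PRIMARY KEY, product_name VARCHAR2(50));
CREATE TABLE time_dim    (time_id    NUMBER PRIMARY KEY, sale_month   VARCHAR2(20));

-- the fact table: numeric measures plus foreign keys to the dimensions;
-- its primary key is the composite of those foreign keys
CREATE TABLE sales_fact (
  product_id NUMBER REFERENCES product_dim (product_id),
  time_id    NUMBER REFERENCES time_dim (time_id),
  amount     NUMBER,
  CONSTRAINT sales_fact_pk PRIMARY KEY (product_id, time_id)
);
```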

Dimensions

A dimension is a structure, often composed of one or more hierarchies, that categorizes data. Several distinct dimensions, combined with measures, enable you to answer business questions. Commonly used dimensions are Customer, Product, and Time.

Typical Levels in a Dimension Hierarchy


Dimension data is typically collected at the lowest level of detail and then aggregated into higher level totals, which is more useful for analysis. For example, in the Total_Customer dimension, there are four levels: Total_Customer, Regions, Territories, and Customers. Data collected at the Customers level is aggregated to the Territories level. For the Regions dimension, data collected for several regions such as Western Europe or Eastern Europe might be aggregated as a fact in the fact table into totals for a larger area such as Europe.

Hierarchies
Hierarchies are logical structures that use ordered levels as a means of organizing data. A hierarchy can be used to define data aggregation. For example, in a Time dimension, a hierarchy might be used to aggregate data from the Month level to the Quarter level to the Year level. A hierarchy can also be used to define a navigational drill path and establish a family structure.
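Such a rollup can be declared to the database with a DIMENSION object (a sketch; it assumes a time_dim table with month, quarter, and year columns):

```sql
CREATE DIMENSION time_dim_def
  LEVEL month   IS time_dim.sale_month
  LEVEL quarter IS time_dim.sale_quarter
  LEVEL year    IS time_dim.sale_year
  HIERARCHY calendar_rollup (
    month CHILD OF quarter CHILD OF year
  );
```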

Levels
Levels represent a position in a hierarchy. For example, a Time dimension might have a hierarchy that represents data at the Month, Quarter, and Year levels. Levels range from general to very specific, with the root level as the highest, or most general level. The levels in a dimension are organized into one or more hierarchies.
Level Relationships
Level relationships specify top-to-bottom ordering of levels from most general (the root) to most specific information and define the parent-child relationship between the levels in a hierarchy.

You can define hierarchies where each level rolls up to the previous level in the dimension or you can define hierarchies that skip one or multiple levels.






Continue with  Data Warehousing Tutorial Part - 9 ... :)