Tuesday, December 30, 2008

Make Money From Free E-books!

Have you ever wondered how marketers who give away ebooks and reports actually benefit from doing so? It might seem a little strange that someone would put time and effort into creating a quality product and then give it away for nothing, but there is an excellent reason for marketing this way: those free ebooks will generate sales and leads for years to come, all on the backend.

What is the purpose of a free ebook?
The purpose of a free ebook is to drive traffic to a website, resulting in sales, subscribers, AdSense clicks, or any other goal you may want to achieve. The main purpose of a free ebook is to presell and generate leads for a paid product (for more details, visit www.create-own-ebook.com). You presell by subtly explaining the advantages and benefits in the ebook, and you generate leads by providing links within it.

How do I create one?
Let's say you have a product about gardening that you sell for $47. You could then write a short report that complements that product, such as a review, a short extract of it, or a brief report about gardening that does not go into as much detail as your paid product. The point is that it is highly relevant to your paid product but does not carry as much value; you don't want to give away a free ebook that reveals all the methods contained in your paid one, you just want to generate interest in it.

How do I distribute it?
You can distribute a free ebook in a number of ways: in your signature file at forums, to your email subscribers, through free ebook directories, on your website, or by arranging joint ventures with other marketers (to know more, log on to www.create-free-pdf.com). You just have to find where your target audience is and offer them your ebook. Most people love to get stuff for free, so you shouldn't have too much trouble getting them to take it.

How do I get my ebook to go viral?
Viral marketing is one of the most powerful ways to market online; you want your ebook to spread as far as possible and reach as many prospects as you can, so you should give people a reason to pass it on for you. This can be done by allowing people to "rebrand" your ebook: you let them replace the links to your products with their affiliate links, so the people who pass your ebook on can actually make money doing so. If people have a good reason to pass your ebook on, they more than likely will.

Free ebooks can be an extremely effective way to advertise your website and products, and if you put in the effort and make a great product, it can generate sales and leads for years to come.

SQL Server Integration Services Package Deployment

SQL 2005: SQL Server Integration Services Package Deployment

Introduction

A decent amount of material has been covered in this introductory series on SQL Server 2005 Integration Services. Many of the basic data flow tasks have been covered along with some database maintenance tasks. It is now time to put these techniques into use.

Scenario

The package development is now complete and it is time to move the package into a working environment, whether that is a testing or a production environment. Using some configuration file settings and SQL Server Management Studio, the developed packages will be deployed to a SQL Server instance.

Implementation

If you have worked all the way through the series, you will have several packages to import. Although they each build on top of one another, this gives a good example of how to deploy several packages that belong to the same SSIS project. The first issue is how SQL Server finds dtsx files. There are numerous ways, but the one I prefer is to add some settings to the Integration Services configuration file to tell SQL Server where to look for dtsx packages. Go ahead and open Windows Explorer and navigate to C:\Program Files\Microsoft SQL Server\90\DTS\Binn.


Then open MsDtsSrvr.ini.xml in your favorite text editor, add a meaningful name denoting the type of packages contained in the folder, and add the physical folder path where the dtsx files are located. For this example, Dev is the name and C:\SSIS is the folder where the files are stored. The completed changes should look like the highlighted area in the following screenshot.
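For reference, here is a minimal sketch of what the modified configuration file might look like. The Dev name and C:\SSIS path come from the example above; the surrounding elements follow the default MsDtsSrvr.ini.xml layout shipped with SQL Server 2005, so verify them against your own copy before saving:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <!-- Added entry: expose the dtsx files in C:\SSIS under the name "Dev" -->
    <Folder xsi:type="FileSystemFolder">
      <Name>Dev</Name>
      <StorePath>C:\SSIS</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

After saving the change, restart the SQL Server Integration Services service so it picks up the new folder.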

Please visit ProgrammingHelp.com for the full article and pictorial tutorial.

http://www.programminghelp.com/database/sqlserver/sql-server-integrated-services-package-deployment/


LINQ Projection in VB

This tutorial was created with Microsoft Visual Studio .NET 2008. Visual Studio 2005 can be used, but you must install Microsoft's LINQ Community Technology Preview release.

In this tutorial we will look at LINQ projection, which lets us select specific fields from a data source without retrieving all of them. We will create a class to define a list in which we will create a number of people with IDs, names and cities. Then we will use buttons to select only parts of this data.

First, we will start off by creating a new Windows Form application in VS.NET 2008. Next, we will create a class - call it aList - and define our list object:


Public Class aList
    Private _personID As Integer
    Private _name As String
    Private _city As String

    Public Property PersonID() As Integer
        Get
            Return _personID
        End Get
        Set(ByVal value As Integer)
            _personID = value
        End Set
    End Property

    Public Property Name() As String
        Get
            Return _name
        End Get
        Set(ByVal value As String)
            _name = value
        End Set
    End Property

    Public Property City() As String
        Get
            Return _city
        End Get
        Set(ByVal value As String)
            _city = value
        End Set
    End Property
End Class

This class defines a property for each field we want, along with its data type.
Next, we can add our controls to the form. We will add three buttons and a RichTextBox. The buttons will retrieve all of the IDs, names and cities, individually. This demonstrates how we can retrieve exactly the data that we want. Once we have our controls, we can move on to the code-behind of the form and define our data. We will add a few sample entries:
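Since the remainder of the walkthrough lives on LinqHelp.com, here is a minimal sketch of what the code-behind might look like. The list name people, the sample values, and the idea of placing the query in a button's Click handler are assumptions for illustration only:

' Sample data defined in the form's code-behind (assumed names and values)
Dim people As New List(Of aList)
people.Add(New aList With {.PersonID = 1, .Name = "Alice", .City = "London"})
people.Add(New aList With {.PersonID = 2, .Name = "Bob", .City = "Paris"})

' Projection: pull out only the Name field from each entry
Dim names = From p In people Select p.Name

' Display the projected values in the RichTextBox
For Each n In names
    RichTextBox1.AppendText(n & vbCrLf)
Next

The same pattern works for the ID and City buttons: only the field named after Select changes.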

Please visit LinqHelp.com to complete this article. Happy coding!


LINQ Projection in C#

In this tutorial we will look at LINQ projection, which lets us select specific fields from a data source without retrieving all of them. We will create a class to define a list in which we will create a number of people with IDs, names and cities. Then we will use buttons to select only parts of this data.

First, we will start off by creating a new Windows Form application in VS.NET 2008. Next, we will create a class - call it aList - and define our list object:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace LINQProjection_cs
{
    class aList
    {
        private int _personID;
        private string _name;
        private string _city;

        public int PersonID
        {
            get { return _personID; }
            set { _personID = value; }
        }

        public string Name
        {
            get { return _name; }
            set { _name = value; }
        }

        public string City
        {
            get { return _city; }
            set { _city = value; }
        }
    }
}

This class defines a property for each field we want, along with its data type.
Next, we can add our controls to the form. We will add three buttons and a RichTextBox. The buttons will retrieve all of the IDs, names and cities, individually. This demonstrates how we can retrieve exactly the data that we want. Once we have our controls, we can move on to the code-behind of the form and define our data. We will add a few sample entries:
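Since the remainder of the walkthrough lives on LinqHelp.com, here is a minimal sketch of what the code-behind might look like. The list name people, the sample values, and the richTextBox1 control name are assumptions for illustration only:

// Sample data defined in the form's code-behind (assumed names and values)
var people = new List<aList>
{
    new aList { PersonID = 1, Name = "Alice", City = "London" },
    new aList { PersonID = 2, Name = "Bob", City = "Paris" }
};

// Projection: pull out only the Name field from each entry
var names = from p in people select p.Name;

// Display the projected values in the RichTextBox
foreach (string n in names)
{
    richTextBox1.AppendText(n + Environment.NewLine);
}

The same pattern works for the ID and City buttons: only the field named after select changes.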

Please visit LinqHelp.com to complete this article. Happy coding!


Querying Table Data Using Visual Basic Code in MS Access

In order to fully utilize the capabilities of MS Access, one must learn not only the Visual Basic (VB) programming language but also Structured Query Language (SQL). Once a grasp of these two languages has been obtained, MS Access users can begin to build faster and more efficient databases.

One tool that has proved very useful to me over the years is querying data from tables or queries using VB and SQL code. A brief introduction to this process is presented in this article. To best understand it, an example is provided below along with an explanation of its parts.

'*********CODE***********

Dim rstTemp As Recordset
Dim strSQL As String
Dim routeNum As Integer

' Build the SQL statement, filtering on the route selected in the combo box
strSQL = "SELECT [Route], [Main Route PM], [Intersecting Route], [IntBeginPM], [IntEndPM] "
strSQL = strSQL & "FROM Intersections_list WHERE (((CStr([Route])) = """ & cmbRouteQuery & """));"

' Open a dynaset-type recordset on the query results
Set rstTemp = CurrentDb.OpenRecordset(strSQL, dbOpenDynaset)

If (Not (rstTemp.EOF)) Then
    rstTemp.MoveFirst
    routeNum = rstTemp(0)   ' store the Route value from the first record
End If

'************************

After the initial variable declarations, the code assigns an SQL statement to the string variable strSQL. This statement directs Access to gather all the data in the Route, Main Route PM, Intersecting Route, IntBeginPM, and IntEndPM fields of the table named Intersections_list. Furthermore, it directs Access to only gather information from these fields where the Route field is equal to a value held in the combo box cmbRouteQuery.

Once the SQL statement has been set, it is passed to the next line of code, which executes it. Note that dbOpenDynaset is a constant built into Access; it holds an integer value that selects the type of recordset to open. For most general purposes, dbOpenDynaset will work just fine.

The If statement in the code example verifies that the recordset just created contains information. If information is present, the code directs Access to move to the first record in the recordset, then stores the route from that record (routeNum = rstTemp(0)) in the variable routeNum for later use.
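To process every matching record rather than just the first, the recordset can be walked with a loop. A short sketch using the same rstTemp recordset; the fields printed are illustrative:

' Loop over all matching records, then release the recordset
Do While Not rstTemp.EOF
    Debug.Print rstTemp![Route], rstTemp![Intersecting Route]
    rstTemp.MoveNext
Loop
rstTemp.Close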


Virtual Private Database in Oracle Enterprise 11g

Oracle Database 11g Enterprise Edition includes the Virtual Private Database feature to provide security for your database. Virtual Private Database, or VPD, is very useful in situations where database roles and standard object privileges cannot meet an application's security requirements. VPD policies can be simple or complex, depending on the amount of security you need to provide.

You can create a secure virtual private database to keep data safe from unauthorized access. VPD is used in environments where multiple users access the same database but only specific information should be available to each group. The best way to secure a virtual private database is to build the security features in during its design. The level of security is high because you secure the database itself instead of relying on controls in some other application.
The best approach is to associate security policies with the views and tables of the database. VPD is designed so that the security policy is enforced whether you access the data directly or indirectly. What's more, you can define security policies for a set of statement types, which eliminates the need to develop separate policies for each statement. It is also possible to apply multiple policies to a group of views, synonyms, or tables.

A feature known as column masking is also used with Virtual Private Database, and it overcomes a drawback of column-level policies. The main problem with column-level VPD security was that it filtered out the rows containing data in the sensitive columns. With column masking, all rows are returned, but the sensitive columns are displayed as NULL in the rows the user is not authorized to see. This way more information is available to authorized users while only the sensitive values are hidden.
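Policies, including column masking, are attached with the DBMS_RLS package. The following is a minimal sketch only: the HR schema, the EMP table, its SALARY column, and the policy function AUTH_EMP_POLICY (which must already exist and return the WHERE-clause predicate) are assumed names for illustration:

BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema         => 'HR',
    object_name           => 'EMP',
    policy_name           => 'MASK_SALARY',
    function_schema       => 'HR',
    policy_function       => 'AUTH_EMP_POLICY',
    statement_types       => 'SELECT',
    sec_relevant_cols     => 'SALARY',
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS); -- column masking: return all rows, NULL the column
END;
/

Without the sec_relevant_cols_opt parameter, the same call would instead filter out the restricted rows entirely.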

Virtual Private Database can be made more secure by providing security at the column or row level by combining VPD with the application context feature. Providing security at such deep levels is termed fine-grained access control, or FGAC, where you can secure individual rows or columns. Whenever a user issues a query against a protected object, the Oracle Database dynamically modifies the statement before the data is retrieved or manipulated. The user is unaware of the security procedures followed at the back end, as they are transparent: whenever he or she accesses the data, only the authorized information is shown. Moreover, you need not modify your application code whenever you want to change a security policy; just change the VPD policies to grant or deny access to any part of the database. Regardless of how you connect to the database, that is, whether you use an application, SQL, or a web interface, there is no way to bypass this security.

Various VPD policy types, such as static, shared, and context-sensitive, are also available to provide a better level of security. You may use context-sensitive and static policies to secure multiple database objects, while shared policies save the overhead of re-executing the policy function for every query.
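The choice among these policy types is made through the policy_type parameter of DBMS_RLS.ADD_POLICY; a fragment, reusing the assumed policy from the sketch above:

    policy_type => DBMS_RLS.SHARED_CONTEXT_SENSITIVE
    -- other constants: DBMS_RLS.STATIC, DBMS_RLS.SHARED_STATIC,
    -- DBMS_RLS.CONTEXT_SENSITIVE, DBMS_RLS.DYNAMIC (the default)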


Accessing Data of Non-Oracle Databases from Oracle

Accessing Data of Non-Oracle Databases from Oracle

Introduction:

This document gives an overview of heterogeneous connections and the configuration steps required to access the data of non-Oracle databases from an Oracle database environment.

ACCESSING NON-ORACLE DATABASES FROM ORACLE

There may be a requirement to access data residing in other flavors of databases, such as MS SQL Server, Access, or Sybase, from the Oracle database. This can be achieved by creating Heterogeneous Services to connect to non-Oracle flavors of database and integrate the data residing in them. The HS (Heterogeneous Services) connection is created with the help of the ODBC driver for that particular flavor of database (e.g. for MS Access we need the MS driver for Access). Generic connectivity is implemented using a Heterogeneous Services ODBC agent. An ODBC agent is included as part of the Oracle installation; it lives in the same ORACLE_HOME, in the HS folder.

The following steps need to be performed in order to configure a heterogeneous connection in the Oracle database:

1. Prepare the non-Oracle environment from which the data needs to be integrated into the Oracle database.
2. Create the ODBC connection.
3. Test the ODBC driver to ensure that connectivity can be made to the non-Oracle database.
4. Ensure the GLOBAL_NAMES parameter in the Oracle database is set to FALSE.
5. Configure Heterogeneous Services. This is done by creating an init<dsn>.ora file.
6. Modify the LISTENER.ORA and TNSNAMES.ORA files so that you can connect to the database.
7. Restart the listener, or start it if a new one has been created specifically for the new connection.
8. Create the database link that connects to the HS connection.
9. Test the connection using the DB link.

Let us walk through the above steps briefly. Here we consider an MS Access database from which data needs to be accessed in Oracle:


1. Create the MS Access database, or copy it to the local server where the Oracle database is hosted.

2. Create the ODBC connection. This can be done as follows:


Click Start → Control Panel → Administrative Tools, and then open Data Sources (ODBC).

Click on the SYSTEM DSN Tab and then click on the ADD button.

Select Microsoft Access Driver (*.mdb) and then click on FINISH.

After that, enter the data source name you would like to use to connect to the MS Access database (in our case it's ChryslerMDB). Also, select the MS Access file by clicking the SELECT button, and then click OK to complete the configuration.

3. Check the connectivity and confirm that the ODBC connection is working fine.

4. Once this is done, check the GLOBAL_NAMES parameter in the Oracle database and make sure that it is FALSE. The following query can be used for this:

********************************************************************************

SQL> select name,value from v$parameter where name like 'global_names%';

NAME

----------------------------------------------------------------

VALUE

----------------------------------------------------------------------------

global_names

FALSE

SQL>

*******************************************************************************
5. Next, we need to create the init<dsn>.ora file for the HSODBC connection. This is done in the ORACLE_HOME\hs\admin folder (here C:\oracle\ora92\hs\admin; on UNIX, $ORACLE_HOME/hs/admin). Create an init<dsn>.ora file with the following contents; in our case the file should be named initchryslermdb.ora:


HS_FDS_CONNECT_INFO = chryslermdb

HS_FDS_TRACE_LEVEL = 0

In the above, chryslermdb is the non-Oracle database connection string, i.e. the DSN name that we created in step 2.

6. Once this is done, the next step is to update the LISTENER.ORA file. We can use the same listener that is being used by the database to access the ODBC connection, or a different one can be configured. Here we will create a new listener for the ODBC connection.

Edit the LISTENER.ORA file in the ORACLE_HOME\network\admin folder (here C:\oracle\ora92\network\admin) and add the following entries:

LISTENER_MSACCESS =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1522))
      )
    )
  )

SID_LIST_LISTENER_MSACCESS =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = C:\oracle\ora92)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (ORACLE_HOME = C:\oracle\ora92)
      (SID_NAME = chryslermdb)
      (PROGRAM = HSODBC)
    )
  )

In the above, SID_NAME chryslermdb is the SID that we have given for the non-Oracle database (e.g. MS Access or MS SQL Server). Once this is done, start the listener by executing the following command at the command line.

C:\Documents and Settings\impactadm\Desktop>lsnrctl start listener_msaccess

LSNRCTL for 32-bit Windows: Version 9.2.0.7 - Production on 15-AUG-2007 13:55

(c) Copyright 1998 Oracle Corporation. All rights reserved.

Starting tnslsnr: please wait...

Then the following message will be seen:

STATUS of the LISTENER
------------------------
Alias                     listener_msaccess
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.7 - Production
Start Date                15-AUG-2007 13:05:56
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener_msaccess.log
Services Summary...
DEV has 1 service handler(s)
IMPACT02 has 1 service handler(s)
IMPORT_access has 1 service handler(s)
PLSExtProc has 1 service handler(s)
The command completed successfully

C:\Documents and Settings\impactadm\Desktop>

Note: If you are using the same listener, just execute the lsnrctl reload command at the command prompt. On Windows machines, you can also go to Services and restart the listener there.

7. The next step is to update the TNSNAMES.ORA file.

Update the TNSNAMES.ORA file with entries similar to the following:

#ACCESS DB for Chrysler
Chryslermdb.world =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS =
        (COMMUNITY = tcp.world)
        (PROTOCOL = TCP)
        (Host = )
        (Port = 1522)
      )
    )
    (CONNECT_DATA = (service_name = chryslermdb))
    (HS = ok)
  )


8. Once the entire configuration is done, check the connectivity with a tnsping command.

9. The next step is to create a database link to this ODBC connection. This can be done as follows:


Log in to the database and execute the following statement:

CREATE DATABASE LINK TEST1 USING 'CHRYSLERMDB.WORLD';

Here TEST1 is the DB link name and CHRYSLERMDB.WORLD is the name of the TNS entry in the TNSNAMES.ORA file.

Once this is done, we can perform operations on the non-Oracle database by executing commands from the Oracle database. For example:

SELECT * FROM DUAL@TEST1;

Note: We have to make sure that the SID name in the TNSNAMES.ORA and LISTENER.ORA files matches the one in the USING clause of the CREATE DATABASE LINK statement; otherwise we will not be able to connect to the database and will get errors. Also, the init<dsn>.ora file in ORACLE_HOME\hs\admin must be created with the correct SID, failing which will lead to errors.

*****************************************************************************************


State of the Art in Visual Database Development

Whether you are in need of a data mining application but never designed a database before, or simply want to save time on designing and managing a new database, your choices of database development tools are virtually limitless. FoxPro, Microsoft Access, FileMaker and many other tools are available at more or less reasonable prices.

But what if you have never done it before? Or what if you have no idea what the final product will look like, but need to get it working right away? Or what if you simply want to save time and efforts designing a database application? In that case, you need something to allow database development to be as simple as possible.

A visual approach to software and database development is not new, but modern tools such as Microsoft Access still require certain experience in database development. Using those tools, you have to know exactly what you are doing and keep in mind what you are going to get. If your idea of a final product is loose, or if you've never done it before, these tools do not necessarily represent the best choice for you.

Luckily, there are other tools on the market that make it possible to create and maintain a database without knowing what's inside or how to do it. SlyDB (http://www.slydb.com/) is one of these tools. Designed to perform database development and maintenance in the most simple and visual way, SlyDB looks similar to Microsoft Outlook, a product familiar to most office users. With a familiar look and feel and a completely visual approach to database development, SlyDB allows anyone to design, launch and maintain a database quickly and efficiently with no prior experience whatsoever.

Despite the simple user interface, SlyDB is packed with features. It supports databases of up to 4GB and of any complexity, complete with all necessary forms, fields, formulas, multimedia features such as pictures and images, and interactive behavior thanks to support for emailing, scripting and reporting. Don't see a feature you need on this short list? Don't worry: there are more than twenty database fields and elements to choose from, as well as over seventy types of scripting actions and more than thirty mathematical functions. With all these elements, there's no need to code anything by hand, ever. Just launch SlyDB and see how simple it is to design a new database!

Didn't put everything you wanted into the first release? No problem. Add or remove fields on the fly with SlyDB! No need to redesign your project or create a new database every time you need to make a change. Modify your database at any time, add or remove elements, or program a new behavior in a completely visual fashion. With SlyDB, you're just one click away from making a perfect database!

SlyDB represents a new generation of visual database development tools. With networking capabilities and flexible licensing policies allowing up to 16 developers sharing a single license, SlyDB also represents a great value for small offices and organizations. Just download your free evaluation copy (http://www.slydb.com/) to find out more about the product!


Database Design Basics by Nicholas Brown

As with any project, taking time to plan ahead now will save you and your business a lot of time down the road. A common mistake with database development is that the designer fails to think ahead. This usually leads to the development of a database that is unable to handle all the needs of the company. Unfortunately, once a database is created and implemented, it is very hard to go back and make changes. This is why planning ahead is so crucial. I have provided a few tips below that will help you to avoid these common mistakes. With a little planning and some hard work, your database will be able to work at its full potential.

Before even looking at your computer, sit down with a tablet of paper and brainstorm. Create a list of all the things that you want your database to do (i.e. inventory tracking, client contact, billing, shipping, etc.). Once you have done this, create a sub-list for each of the items you came up with from the previous step. This list should include any items or useful information that will need to be collected. For example, if you would like to track shipping, you will probably want to collect information such as: date shipped, method of shipping, price of shipping, shipping details such as weight and dimensions, date delivered, etc. Remember to think ahead to what you will want to do with this information later. This information, for example, can be used to estimate shipping costs for the following year based on the previous year’s costs.

Once you have completed your list, it's time to begin the basic design of the database. At this point, many designers begin to think about form creation and reports. However, the most important step following brainstorming is developing your tables. Once these tables have been created, they will be difficult to change later. For this reason, take your time and make sure you have everything you need.

The next step is to test your tables. Take information that you already have available to you (previous sales logs, etc.) and see if all the information can be placed somewhere that will be easy to access. This will ensure that you can at least collect what you have already been collecting.

As a final piece of advice, add a “comment” field to your tables so that notes can be made pertaining to the data. This is especially helpful when trying to look back at information later. A simple comment can help to clarify information that might otherwise be very difficult to understand.
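To make this advice concrete, here is a sketch of a shipping table in SQL. The table name, field names, and types are illustrative assumptions, not part of the article:

CREATE TABLE Shipments (
    ShipmentID    INT PRIMARY KEY,   -- one row per shipment
    DateShipped   DATE,
    ShipMethod    VARCHAR(30),
    ShipCost      DECIMAL(10, 2),
    WeightKg      DECIMAL(8, 2),
    DateDelivered DATE,
    Comments      VARCHAR(255)       -- free-form notes that clarify the record later
);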


Joins in SQL Server

Just open SQL Server and start typing the following (for spoon feeding, you can also copy and paste):

What are joins? Joins retrieve combinations of rows from two tables on the basis of a desired match.

Types of Joins-

* Inner (Equi Join, Natural Join, Cross Join)
* Outer (Left Outer Join, Right Outer Join, Full Outer Join)
* Self Join

/* Note: JOIN on its own implicitly means INNER JOIN;
   an OUTER JOIN must be qualified with the LEFT, RIGHT or FULL keyword */

Create the first table, Department:

create table Department
(
DepartmentID int PRIMARY KEY,
DepartmentName varchar(50)
)

/* Here DepartmentID is the primary key */

Insert some values into the Department table:

insert into Department values(1,'HR')
insert into Department values(2,'Admin')
insert into Department values(3,'Establishment')
insert into Department values(4,'SoftwareDevelopment')
insert into Department values(5,'Clerical')

Create the second table, Employee:

create table Employee
(
EmployeeID int PRIMARY KEY IDENTITY,
EmployeeName varchar(50),
DepartmentID int foreign key references Department(DepartmentID)
)

/* Here DepartmentID is a foreign key */

Insert some values into the Employee table:

insert into Employee values('Jazz',2)
insert into Employee values('Mic',2)
insert into Employee values('Joe',3)
insert into Employee values('Sam',5)
insert into Employee values('Aby',5)
insert into Employee values('Jazz',3)
insert into Employee values('Rai',2)
insert into Employee values('Tarry',2)
insert into Employee values('Shally',1)
insert into Employee values('Akash',2)

select * from Department,Employee
where Department.DepartmentID=Employee.DepartmentID

/* The above query generates a table containing all the records where the DepartmentID in both tables is the same. The same result can be obtained by performing an inner join on the two tables */

/*****INNER JOIN*****/

select * from Department INNER JOIN Employee
ON Department.DepartmentID=Employee.DepartmentID

/*****Equi Join*****/

select * from Department INNER JOIN Employee
ON Department.DepartmentID=Employee.DepartmentID

/* The above query fetches the same result: an equi join is an inner join whose predicate uses '='. An inner join (theta join) may use any other comparison operator instead, as the following example shows */

select * from Department INNER JOIN Employee
ON Department.DepartmentID>Employee.DepartmentID

/*****Natural Join*****/

select * from Department NATURAL JOIN Employee

/* The above would generate the same result, with the only difference that a natural join returns just one copy of the identical column */

/* SQL Server does not support this syntax; it fetches the natural join of two tables by using the INNER JOIN operation.
Visit the link: http://blog.sqlauthority.com/2008/10/17/sql-server-get-common-records-from-two-tables-without-using-join */

/*****Cross Join*****/

select * from Department CROSS JOIN Employee

/* Cross Join gives the Cartesian product: every row of the first table paired with every row of the second; no join condition is used */

/*****OUTER JOIN*****/

/* Unlike an inner join, an outer join does not require a match: each record is retained even if no matching record exists in the other table */

/*****Left Outer Join*****/

select * from Department LEFT OUTER JOIN Employee
ON Department.DepartmentID=Employee.DepartmentID

/* It gives all records from the first table even if there is no match in the second table. For instance, in this example there is no Employee working in Department 4 (SoftwareDevelopment) */

/*****Right Outer Join*****/

select * from Department RIGHT OUTER JOIN Employee
ON Department.DepartmentID=Employee.DepartmentID

/* It gives all records from the second table even if there is no matching record in the first. Here, every Employee belongs to some Department */

/*****Full Outer Join*****/

select * from Department FULL OUTER JOIN Employee
ON Department.DepartmentID=Employee.DepartmentID

/* It shows all the records from both tables and fills with NULL where no match occurs */

/*****SELF JOIN*****/

/* A join applied to the same table */

select A.EmployeeID,A.EmployeeName,B.EmployeeID,B.EmployeeName,A.DepartmentID
from Employee A,Employee B
where A.DepartmentID=B.DepartmentID

/* The above query retrieves combinations of all Employees working in the same Department from the Employee table */
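The same self join can also be written in explicit ANSI join syntax, which is equivalent but generally preferred today:

select A.EmployeeID,A.EmployeeName,B.EmployeeID,B.EmployeeName,A.DepartmentID
from Employee A INNER JOIN Employee B
ON A.DepartmentID=B.DepartmentID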

That's all with joins!!

Shees Abidi,

Syntel Ltd.,




Partitioning Using DBMS_REDEFINITION

Oracle Table Partitioning

1 Introduction

Oracle provides a useful feature called partitioning, which enables dividing an object (a table or an index) into smaller pieces based on particular fields or values. This helps in maintaining and administering tables and indexes in a better way compared to normal tables. With the help of partitioning, the availability of database objects and the performance of DML and DDL operations on those objects can be improved. In the partitioning concept, a single table or index is divided into multiple smaller pieces called partitions. Each partition can be given its own name and treated as a part of the object, so DBA tasks can be performed on the partitions as a whole or individually, e.g. backing up a particular portion of the table. Partitioning can also improve the performance of multi-table joins via a technique known as a partition-wise join. Oracle Partitioning is an additional feature available when Oracle Server is installed, but only with the Enterprise Edition. If Partitioning is not enabled, you will see the following error when trying to partition:


ORA-00439: feature not enabled: Partitioning

1.1 Advantages

* Query subsets of data.
* Partitions usually provide enhanced performance when accessing large tables.
* Table reorganizations can be done at the partition level.
* Reduced downtime for scheduled maintenance.
* Reduced downtime due to data failure.
* Improved I/O performance.

1.2 Evolution of Partitioning in Oracle

* Oracle 8 → Range
* Oracle 8i → Range, Hash, Range-Hash
* Oracle 9i → Range, Hash, List, Range-Hash
* Oracle 9i Release 2 → Range, Hash, List, Range-List, Range-Hash

1.3 Decision to Partition Tables

The main decision for an Oracle DBA or developer is whether to partition a particular table or not. Here are some tips for making that decision:

* For "large" tables, i.e. tables >= 2 GB.
* If the performance gain outweighs the management overhead of partitioning.
* If archiving of data is scheduled and repetitive.

Tip: SQL to identify the size of a table

SELECT B.OWNER,
       B.TABLESPACE_NAME,
       B.TABLE_NAME,
       ROUND (SUM (A.BYTES) / 1024 / 1024 / 1024, 6) GIGS
  FROM SYS.DBA_EXTENTS A,
       SYS.DBA_TABLES B
 WHERE A.OWNER = B.OWNER                     -- join extents to their owning table...
   AND A.SEGMENT_NAME = B.TABLE_NAME         -- ...so only this table's extents are summed
   AND A.TABLESPACE_NAME = B.TABLESPACE_NAME
   AND B.OWNER = UPPER ('&OWNER')
   AND B.TABLE_NAME = '&TABLE'
 GROUP BY B.OWNER, B.TABLESPACE_NAME, B.TABLE_NAME;

2 Partitioning methods
2.1 Range Partitioning

Range partitioning was the first partitioning method supported by Oracle in Oracle 8. Range partitioning was probably the first partition method because data normally has some sort of logical range. For example, business transactions can be partitioned by various versions of date (start date, transaction date, close date, or date of payment). Range partitioning can also be performed on part numbers, serial numbers or any other ranges that can be discovered.

The below shown syntax can be used to implement Range Partitioning:

Example 1: Range partition example using a single tablespace.

CREATE TABLE EMP
(EMPNO NUMBER (7),
 NAME VARCHAR2 (50),
 DESIGNATION VARCHAR2 (10),
 SALARY NUMBER (9, 3))
TABLESPACE USERS
PARTITION BY RANGE (SALARY)
-- partition bounds must be listed in ascending order
(PARTITION SAL_C VALUES LESS THAN (50000),
 PARTITION SAL_B VALUES LESS THAN (100000),
 PARTITION SAL_A VALUES LESS THAN (200000));

Example 2: Range partition example using multiple tablespaces. This method provides better performance when the table is very big.

CREATE TABLE EMP
(EMPNO NUMBER (7),
 NAME VARCHAR2 (50),
 DESIGNATION VARCHAR2 (10),
 SALARY NUMBER (9, 3))
PARTITION BY RANGE (SALARY)
(PARTITION SAL_C VALUES LESS THAN (50000) TABLESPACE EMP3,
 PARTITION SAL_B VALUES LESS THAN (100000) TABLESPACE EMP2,
 PARTITION SAL_A VALUES LESS THAN (200000) TABLESPACE EMP1);

Important note on range partitions: range partitions are ordered by defining the lower and upper boundary for each partition, and the boundaries must be listed in ascending order. Partition sizes may differ substantially depending on the amount of data mapped to each partition, which can cause sub-optimal performance for operations like parallel DML. So thorough analysis of the data is required before deciding on the partitions.

2.2 List Partitioning

List partitioning was added as a partitioning method in Oracle 9i Release 1. List partitioning allows for partitions to reflect real-world groupings (e.g. business units and territory regions). List partitioning differs from range partition in that the groupings in list partitioning are not side by side or in a logical range. List partitioning gives the DBA the ability to group together seemingly unrelated data into a specific partition.

The LIST_ME.SQL script provides an example of a list partition table. Note the last partition with the DEFAULT value. This DEFAULT value is new in Oracle 9i Release 2.

CREATE TABLE EMP
(EMPNO NUMBER (7),
 NAME VARCHAR2 (50),
 DESIGNATION VARCHAR2 (10),
 SALARY NUMBER (9, 3))
TABLESPACE USERS
PARTITION BY LIST (DESIGNATION)
(PARTITION DES_A VALUES ('MANAGER', 'SENIOR MANAGER'),
 PARTITION DES_B VALUES ('ANALYST', 'SENIOR ANALYST'),
 PARTITION DES_C VALUES ('ENGINEER', 'SENIOR ENGINEER'),
 PARTITION DES_D VALUES (DEFAULT));  -- catch-all DEFAULT partition (new in 9i Release 2)

2.3 Hash Partitioning

Hash partitioning is the method in which partitioning is implemented by applying a hash function to the partition key. It is best used in cases where we do not have a list or range of values for the table in hand beforehand. The following syntax can be used:

CREATE TABLE EMP
(EMPNO NUMBER (7),
 NAME VARCHAR2 (50),
 DESIGNATION VARCHAR2 (10),
 SALARY NUMBER (9, 3))
PARTITION BY HASH (EMPNO)
PARTITIONS 5 STORE IN (P1, P2, P3, P4, P5);

In the above statement, the PARTITIONS count can be changed based on the performance or throughput of the operations performed on the table; P1 through P5 are the tablespaces in which the partitions are stored.

2.4 Composite Range-Hash Partitioning

Composite range-hash partitioning combines both the ease of range partitioning and the benefits of hashing for data placement, striping, and parallelism. Range-hash partitioning is slightly harder to implement, but with the example provided and a detailed explanation of the code one can easily learn how to use this powerful partitioning method.

One suggestion is that when you actually try to build a range-hash partition table that you do it in the following steps:

1. Determine the partition key for the range.
2. Design a range partition table.
3. Determine the partition key for the hash.
4. Create the SUBPARTITION BY HASH clause.
5. Create the SUBPARTITION TEMPLATE.
6. Do Steps 1 and 2 first. Then you can insert the code created in Steps 3–5 into the range partition table syntax.

CREATE TABLE DEMO ( ID NUMBER,
TXT VARCHAR2(50))
PARTITION BY RANGE (ID)
SUBPARTITION BY HASH (TXT)
SUBPARTITIONS 4 STORE IN (DATA01, DATA02)
(PARTITION KB_LO VALUES LESS THAN (0),
PARTITION KB_HI VALUES LESS THAN (100),
PARTITION KB_MX VALUES LESS THAN (MAXVALUE)
SUBPARTITIONS 2 STORE IN (DATA03));

2.5 Composite Range-List partitioning

Composite range-list partitioning combines both the ease of range partitioning and the benefits of list partitioning at the sub partition level. This is a combination of the Range and the List partitions. Like range-hash partitioning, range-list partitioning needs to be carefully designed. The time used to properly design a range-list partition table pays off during the actual creation of the table.

CREATE TABLE EMPDEMO
(EMPNO NUMBER(7),
 NAME VARCHAR2(50),
 DESIGNATION VARCHAR2(10),
 SALARY NUMBER(9,3))
TABLESPACE USERS
PARTITION BY RANGE (EMPNO)
SUBPARTITION BY LIST (DESIGNATION)
SUBPARTITION TEMPLATE
 (SUBPARTITION DES_A VALUES ('MANAGER', 'SENIOR MANAGER'),
  SUBPARTITION DES_B VALUES ('ANALYST', 'SENIOR ANALYST'),
  SUBPARTITION DES_C VALUES ('ENGINEER', 'SENIOR ENGINEER'))
-- range partitions (EMPNO boundaries chosen for illustration)
(PARTITION EMP_LOW VALUES LESS THAN (5000),
 PARTITION EMP_HIGH VALUES LESS THAN (MAXVALUE));

3 Indexes for partitioned tables

The indexes for partitioned tables are of two types.

* Globally Partitioned Indexes
* Locally Partitioned Indexes

3.1 Globally Partitioned Indexes

There are two main types of globally partitioned indexes:

* Non-Partitioned
* Partitioned

Globally Non-Partitioned Indexes are “regular” indexes used in OLTP.

Globally Partitioned Indexes are similar in syntax to Range partitioned tables.

CREATE INDEX PARTITION_BY_RANGE_GPI
ON PARTITION_BY_RANGE (BIRTH_YYYY)
GLOBAL PARTITION BY RANGE (BIRTH_YYYY)
(PARTITION DOBS_IN_1971_OR_B4
   VALUES LESS THAN (1972)
   TABLESPACE ITS01,
 PARTITION DOBS_IN_1972_GPI
   VALUES LESS THAN (1973)
   TABLESPACE ITS02,
 . . .
 PARTITION DOBS_IN_1975_OR_L8R
   VALUES LESS THAN (MAXVALUE)
   TABLESPACE ITS05);

3.2 Locally Partitioned Indexes

Locally partitioned indexes are for the most part very straightforward.

Extra time should be allocated when creating locally partitioned indexes on range-hash or range-list partitioned tables, chiefly because a decision needs to be made on what the index will reference: a locally partitioned index can be created to point to either the partition level or the subpartition level.

Maintenance of locally partitioned indexes is much easier than the maintenance of globally partitioned indexes: whenever there is DDL activity on the underlying indexed table, Oracle rebuilds the affected partitions of the locally partitioned index.

This automatic rebuilding of locally partitioned indexes is one reason why most DBAs prefer locally partitioned indexes.
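For comparison with the global example above, a locally partitioned index needs only the LOCAL keyword, since it inherits the table's partitioning. A sketch against the range-partitioned EMP table from section 2.1 (the index name is an assumption):

CREATE INDEX EMP_SAL_LPI
ON EMP (SALARY)
LOCAL;  -- one index partition per table partition, maintained automatically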

4 When to use which partitioning Method

There are five different table partitioning methods (range, hash, list, range-hash and range-list) and three for indexes (global non-partitioned, global partitioned and locally partitioned). The obvious question that comes to mind is: “When do I use which combination of table and index partitioning?” There is no concrete answer for that question. However, here are some general guidelines on mixing and matching table and index partitioning.

1. First, determine if you need to partition the table.
   * Refer to section 1.3, "Decision to Partition Tables".

2. Next, decide which table partitioning method is right for your situation.
   * Each method is described under section 2.

3. Determine how volatile the data is.
   * How often are there inserts, updates and deletes?

4. Choose your indexing strategy: global or local partitioned indexes.
   * Each type has its own maintenance considerations.

These guidelines are a good place to start when developing a partitioning solution.

5 Partitioning Existing tables using the dbms_redefinition package

DBMS_REDEFINITION is a built-in PL/SQL package available with Oracle versions from 9i onwards. It can be used to redefine the structure of a table online, while the table remains available to users.

The DBMS_REDEFINITION package can be created by executing the dbmshord.sql script available at $ORACLE_HOME/rdbms/admin/dbmshord.sql.

There are different procedures available in the DBMS_REDEFINITION package that are used for implementing partitioning:

dbms_redefinition.can_redef_table – used to find out whether a table can be redefined using the redefinition process.

dbms_redefinition.start_redef_table – starts the redefinition process.

dbms_redefinition.sync_interim_table – synchronizes the data between the source table and the interim table.

dbms_redefinition.finish_redef_table – performs the final switch-over of the redefinition process.

The following are the steps that were carried out for implementing the redefinition process, along with the test results:

5.1 Range Partition Process Explained

Environment selected: DCSCAN production server.
Original table name: CESAR_VEHICLE_HISTORY
Size of the table: 2195 MB
No. of rows: 8.23M

For the rest of the discussion in this example we use two tables:

1. CESAR_VEHICLE_HISTORY, referred to as the original table.
2. CESAR_VEHICLE_HISTORY_INT, referred to as the interim table.

Step 1: Create an interim table CESAR_VEHICLE_HISTORY_INT

This is an interim table and will be dropped after the partitioning process has completed, so the desired partitions have to be defined on this table. In this specific example we have decided to range partition the table based on a field called TIMESTAMP_DT, with each partition holding data for one quarter.

CREATE TABLE MPCI5171.CESAR_VEHICLE_HISTORY_INT
(
VEHICLE_NO_INT VARCHAR2(22 BYTE) NOT NULL,
TIMESTAMP NUMBER(21,7) NOT NULL,
ACTION VARCHAR2(4 BYTE),
VACTION VARCHAR2(4 BYTE),
ACTION_CONTROL_PURCHASE VARCHAR2(4 BYTE),
PURCHASING_STATUS_OLD VARCHAR2(4 BYTE),
PURCHASING_STATUS_NEW VARCHAR2(4 BYTE),
ACTION_CONTROL_SALES VARCHAR2(4 BYTE),
SALES_STATUS_OLD VARCHAR2(4 BYTE),
SALES_STATUS_NEW VARCHAR2(4 BYTE),
VEHICLE_LOCATION VARCHAR2(10 BYTE),
PERSON_CREATED_OBJECT VARCHAR2(12 BYTE),
ALLOCATION_REASON VARCHAR2(40 BYTE),
QUOTE_NO VARCHAR2(10 BYTE),
DEALERFRONTEND_USERID VARCHAR2(50 BYTE),
REASON_CODE VARCHAR2(3 BYTE),
CUSTOMER_NO VARCHAR2(10 BYTE),
SHIP_TO_PARTY VARCHAR2(10 BYTE),
DELIVERY_DATE_REQ DATE,
VEHICLE_USAGE VARCHAR2(3 BYTE),
CUSTOMER_PURCHASE_ORDERNO VARCHAR2(20 BYTE),
KERRIDGE_STOCK_NO VARCHAR2(18 BYTE),
END_CUSTOMER_NAME VARCHAR2(30 BYTE),
SALESPERSON VARCHAR2(10 BYTE),
CHARACTERISTIC_VAL1 VARCHAR2(120 BYTE),
CHARACTERISTIC_VAL2 VARCHAR2(120 BYTE),
CHARACTERISTIC_VAL3 VARCHAR2(120 BYTE),
CHARACTERISTIC_VAL4 VARCHAR2(120 BYTE),
CHARACTERISTIC_VAL5 VARCHAR2(120 BYTE),
CHARACTERISTIC_VAL6 VARCHAR2(120 BYTE),
CONFIGURATION_INT VARCHAR2(18 BYTE),
ADDRESS_NO VARCHAR2(10 BYTE),
CRM_REFERENCE VARCHAR2(18 BYTE),
SHIP_EST_ARRIVAL_DATE DATE,
FACTORD_PLANNED_FIN_DATE DATE,
EARLIER_VEHICLE_REQ VARCHAR2(1 BYTE),
NAME1 VARCHAR2(40 BYTE),
NAME2 VARCHAR2(40 BYTE),
NAME3 VARCHAR2(40 BYTE),
NAME4 VARCHAR2(40 BYTE),
VESSEL_NAME VARCHAR2(20 BYTE),
SHIP_VOYAGE_NO VARCHAR2(18 BYTE),
SHIP_BILL_OF_LADING_NO VARCHAR2(18 BYTE),
SHIP_EST_DEPARTURE_DATE DATE,
PORT_OF_DESTINATION VARCHAR2(10 BYTE),
KERRIDGE_USER VARCHAR2(50 BYTE),
ZZ_PLANNED_DELIVERY_TIME NUMBER(15),
VEH_MANUFACTURER_YEAR VARCHAR2(4 BYTE),
VEH_CONSTRUCTION_MONTH VARCHAR2(2 BYTE),
VEH_CONSTRUCTION_DAY VARCHAR2(2 BYTE),
VIN VARCHAR2(35 BYTE),
PLANNED_DELIVERY_TIME DATE,
PRODUCTION_TIME DATE,
ORDER_TIME DATE,
VEHICLE_LOCATION_FROM VARCHAR2(10 BYTE),
VEHICLE_LOCATION_FROM_TEXT VARCHAR2(30 BYTE),
VEHICLE_LOCATION_TO VARCHAR2(10 BYTE),
VEHICLE_LOCATION_TO_TEXT VARCHAR2(30 BYTE),
VEND_CRED_ACCOUNT_NO VARCHAR2(10 BYTE),
CARRIER_NAME VARCHAR2(35 BYTE),
COMMENTS VARCHAR2(40 BYTE),
PROMISED_TIME DATE,
TIMESTAMP_DT DATE,
COUNTRY_CODE VARCHAR2(4 BYTE),
VEHICLE_NO VARCHAR2(10 BYTE),
FACTORY_ORDER_DATE DATE,
DAYS_SINCE NUMBER(10),
TIMESTAMP_DT_WEEK VARCHAR2(10 BYTE),
DAYS_SINCE_FIRST NUMBER(10),
DAYS_SINCE_LAST NUMBER(10)
)


TABLESPACE MPCDATA
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
  INITIAL 1M
  MINEXTENTS 1
  MAXEXTENTS 2147483645
  PCTINCREASE 0
  BUFFER_POOL DEFAULT
)
LOGGING
PARTITION BY RANGE (TIMESTAMP_DT)
-- each bound is the first day of the following quarter, so VALUES LESS THAN
-- (which is exclusive) keeps an entire quarter's rows in its partition
(PARTITION CESAR_VEHICLE_HISTORY_2005Q4 VALUES LESS THAN (TO_DATE('01/01/2006', 'DD/MM/YYYY')),
 PARTITION CESAR_VEHICLE_HISTORY_2006Q1 VALUES LESS THAN (TO_DATE('01/04/2006', 'DD/MM/YYYY')),
 PARTITION CESAR_VEHICLE_HISTORY_2006Q2 VALUES LESS THAN (TO_DATE('01/07/2006', 'DD/MM/YYYY')),
 PARTITION CESAR_VEHICLE_HISTORY_2006Q3 VALUES LESS THAN (TO_DATE('01/10/2006', 'DD/MM/YYYY')),
 PARTITION CESAR_VEHICLE_HISTORY_2006Q4 VALUES LESS THAN (TO_DATE('01/01/2007', 'DD/MM/YYYY')),
 PARTITION CESAR_VEHICLE_HISTORY_2007Q1 VALUES LESS THAN (TO_DATE('01/04/2007', 'DD/MM/YYYY')),
 PARTITION CESAR_VEHICLE_HISTORY_2007Q2 VALUES LESS THAN (TO_DATE('01/07/2007', 'DD/MM/YYYY')),
 PARTITION CESAR_VEHICLE_HISTORY_2007Q3 VALUES LESS THAN (TO_DATE('01/10/2007', 'DD/MM/YYYY')),
 PARTITION CESAR_VEHICLE_HISTORY_2007Q4 VALUES LESS THAN (TO_DATE('01/01/2008', 'DD/MM/YYYY')));



Step 2: Check if the table can be partitioned

Execute the statement below to verify that the original table can be redefined.

EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('MPCI5171', 'CESAR_VEHICLE_HISTORY');

The above step should not return any errors, which means that we can go ahead.



Step 3: Start the redefinition

BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(
    UNAME      => 'MPCI5171',
    ORIG_TABLE => 'CESAR_VEHICLE_HISTORY',
    INT_TABLE  => 'CESAR_VEHICLE_HISTORY_INT');
END;
/



Step 4: Synchronize the tables

BEGIN
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
    UNAME      => 'MPCI5171',
    ORIG_TABLE => 'CESAR_VEHICLE_HISTORY',
    INT_TABLE  => 'CESAR_VEHICLE_HISTORY_INT');
END;
/



Step 5: Create the constraints and indexes on the interim table, matching the original table but with new names

CREATE INDEX MPCI5171.CES_VEHIHIST_P_NOINT_IDX01 ON MPCI5171.CESAR_VEHICLE_HISTORY_INT
(VEHICLE_NO_INT)
LOGGING
TABLESPACE MPCINDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT)
NOPARALLEL;

CREATE INDEX MPCI5171.CES_VEHIHIST_P_VACT_IDX ON MPCI5171.CESAR_VEHICLE_HISTORY_INT
(VACTION)
LOGGING
TABLESPACE MPCINDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT)
NOPARALLEL;

CREATE UNIQUE INDEX MPCI5171.CES_VEHI_HIST_P_PK ON MPCI5171.CESAR_VEHICLE_HISTORY_INT
(VEHICLE_NO_INT, TIMESTAMP)
LOGGING
TABLESPACE MPCINDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT)
NOPARALLEL;

CREATE INDEX MPCI5171.IX1CESAR_VEHICLE_HISTORY_P ON MPCI5171.CESAR_VEHICLE_HISTORY_INT
(ACTION)
LOGGING
TABLESPACE MPCINDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT)
NOPARALLEL;

CREATE INDEX MPCI5171.IX2CESAR_VEHICLE_HISTORY_P ON MPCI5171.CESAR_VEHICLE_HISTORY_INT
(VEHICLE_NO)
LOGGING
TABLESPACE MPCINDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT)
NOPARALLEL;

ALTER TABLE MPCI5171.CESAR_VEHICLE_HISTORY_INT ADD (
  CONSTRAINT CES_VEHI_HIST_PK_P PRIMARY KEY (VEHICLE_NO_INT, TIMESTAMP)
  USING INDEX
  TABLESPACE MPCINDEX
  PCTFREE 10
  INITRANS 2
  MAXTRANS 255
  STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0));



Step 6: Gather statistics on the new table

EXEC DBMS_STATS.GATHER_TABLE_STATS('MPCI5171', 'CESAR_VEHICLE_HISTORY_INT', CASCADE => TRUE);



Step 7: Finish the redefinition process

BEGIN
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(
    UNAME      => 'MPCI5171',
    ORIG_TABLE => 'CESAR_VEHICLE_HISTORY',
    INT_TABLE  => 'CESAR_VEHICLE_HISTORY_INT');
END;
/

At this point the interim table has become the "real" table and their names have been switched in the data dictionary. All that remains is to perform some cleanup operations.



Step 8: Now drop the interim table CESAR_VEHICLE_HISTORY_INT

DROP TABLE CESAR_VEHICLE_HISTORY_INT;



Step 9: Rename the constraints and indexes to match the original names

ALTER INDEX CES_VEHIHIST_P_NOINT_IDX01 RENAME TO CES_VEHIHIST_T_NOINT_IDX01;
ALTER INDEX CES_VEHI_HIST_P_PK RENAME TO CES_VEHI_HIST_T_PK;
ALTER INDEX IX1CESAR_VEHICLE_HISTORY_P RENAME TO IX1CESAR_VEHICLE_HISTORY_T;
ALTER INDEX IX2CESAR_VEHICLE_HISTORY_P RENAME TO IX2CESAR_VEHICLE_HISTORY_T;



Step 10: Once the partitioning is implemented, it can be verified using the following SQL queries:

SELECT PARTITIONED FROM USER_TABLES WHERE TABLE_NAME = 'CESAR_VEHICLE_HISTORY';

This will return YES, which means that the original table is partitioned.

SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME = 'CESAR_VEHICLE_HISTORY';

The above query returns the names of the partitions of the original table.


5.1.1 Test Results

In the production environment we have a procedure named CESAR_MARKETING_MEASURES. This procedure uses the original table CESAR_VEHICLE_HISTORY in many select statements.

Execution of the procedure took 5 hrs 31 min without partitioning.

Execution of the same procedure took 4 hrs 45 min after implementing range partitioning.

5.1.2 Conclusion

The test results reveal that implementing a range partition on the field TIMESTAMP_DT is not the right choice in the given scenario, simply because the execution time of the procedure CESAR_MARKETING_MEASURES has not improved much after implementing the range partition. So we continued our testing with hash partitioning, which is explained in the next section.


5.2 Hash Partitioning Process Explained

Environment selected: DCSCAN production.
Table: CESAR_VEHICLE_HISTORY
Size of the table: 2195 MB
No. of rows: 8.23M



Step 1: Create a partitioned interim table CESAR_VEHICLE_HISTORY_INT

CREATE TABLE MPCI5171.CESAR_VEHICLE_HISTORY_INT
(
VEHICLE_NO_INT VARCHAR2(22 BYTE) NOT NULL,
TIMESTAMP NUMBER(21,7) NOT NULL,
ACTION VARCHAR2(4 BYTE),
VACTION VARCHAR2(4 BYTE),
ACTION_CONTROL_PURCHASE VARCHAR2(4 BYTE),
PURCHASING_STATUS_OLD VARCHAR2(4 BYTE),
PURCHASING_STATUS_NEW VARCHAR2(4 BYTE),
ACTION_CONTROL_SALES VARCHAR2(4 BYTE),
SALES_STATUS_OLD VARCHAR2(4 BYTE),
SALES_STATUS_NEW VARCHAR2(4 BYTE),
VEHICLE_LOCATION VARCHAR2(10 BYTE),
PERSON_CREATED_OBJECT VARCHAR2(12 BYTE),
ALLOCATION_REASON VARCHAR2(40 BYTE),
QUOTE_NO VARCHAR2(10 BYTE),
DEALERFRONTEND_USERID VARCHAR2(50 BYTE),
REASON_CODE VARCHAR2(3 BYTE),
CUSTOMER_NO VARCHAR2(10 BYTE),
SHIP_TO_PARTY VARCHAR2(10 BYTE),
DELIVERY_DATE_REQ DATE,
VEHICLE_USAGE VARCHAR2(3 BYTE),
CUSTOMER_PURCHASE_ORDERNO VARCHAR2(20 BYTE),
KERRIDGE_STOCK_NO VARCHAR2(18 BYTE),
END_CUSTOMER_NAME VARCHAR2(30 BYTE),
SALESPERSON VARCHAR2(10 BYTE),
CHARACTERISTIC_VAL1 VARCHAR2(120 BYTE),
CHARACTERISTIC_VAL2 VARCHAR2(120 BYTE),
CHARACTERISTIC_VAL3 VARCHAR2(120 BYTE),
CHARACTERISTIC_VAL4 VARCHAR2(120 BYTE),
CHARACTERISTIC_VAL5 VARCHAR2(120 BYTE),
CHARACTERISTIC_VAL6 VARCHAR2(120 BYTE),
CONFIGURATION_INT VARCHAR2(18 BYTE),
ADDRESS_NO VARCHAR2(10 BYTE),
CRM_REFERENCE VARCHAR2(18 BYTE),
SHIP_EST_ARRIVAL_DATE DATE,
FACTORD_PLANNED_FIN_DATE DATE,
EARLIER_VEHICLE_REQ VARCHAR2(1 BYTE),
NAME1 VARCHAR2(40 BYTE),
NAME2 VARCHAR2(40 BYTE),
NAME3 VARCHAR2(40 BYTE),
NAME4 VARCHAR2(40 BYTE),
VESSEL_NAME VARCHAR2(20 BYTE),
SHIP_VOYAGE_NO VARCHAR2(18 BYTE),
SHIP_BILL_OF_LADING_NO VARCHAR2(18 BYTE),
SHIP_EST_DEPARTURE_DATE DATE,
PORT_OF_DESTINATION VARCHAR2(10 BYTE),
KERRIDGE_USER VARCHAR2(50 BYTE),
ZZ_PLANNED_DELIVERY_TIME NUMBER(15),
VEH_MANUFACTURER_YEAR VARCHAR2(4 BYTE),
VEH_CONSTRUCTION_MONTH VARCHAR2(2 BYTE),
VEH_CONSTRUCTION_DAY VARCHAR2(2 BYTE),
VIN VARCHAR2(35 BYTE),
PLANNED_DELIVERY_TIME DATE,
PRODUCTION_TIME DATE,
ORDER_TIME DATE,
VEHICLE_LOCATION_FROM VARCHAR2(10 BYTE),
VEHICLE_LOCATION_FROM_TEXT VARCHAR2(30 BYTE),
VEHICLE_LOCATION_TO VARCHAR2(10 BYTE),
VEHICLE_LOCATION_TO_TEXT VARCHAR2(30 BYTE),
VEND_CRED_ACCOUNT_NO VARCHAR2(10 BYTE),
CARRIER_NAME VARCHAR2(35 BYTE),
COMMENTS VARCHAR2(40 BYTE),
PROMISED_TIME DATE,
TIMESTAMP_DT DATE,
COUNTRY_CODE VARCHAR2(4 BYTE),
VEHICLE_NO VARCHAR2(10 BYTE),
FACTORY_ORDER_DATE DATE,
DAYS_SINCE NUMBER(10),
TIMESTAMP_DT_WEEK VARCHAR2(10 BYTE),
DAYS_SINCE_FIRST NUMBER(10),
DAYS_SINCE_LAST NUMBER(10)
)
TABLESPACE MPCDATA
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
  INITIAL 1M
  MINEXTENTS 1
  MAXEXTENTS 2147483645
  PCTINCREASE 0
  BUFFER_POOL DEFAULT
)
LOGGING
PARTITION BY HASH (VEHICLE_NO_INT)
PARTITIONS 20;



In the above example, the hash partitioning approach is selected with the number of partitions set to 20, and all partitions reside in a single tablespace named MPCDATA. If the table is very big, it is a good idea to place each partition in a separate tablespace.



Step 2: Check if the table can be partitioned

EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('MPCI5171', 'CESAR_VEHICLE_HISTORY');

The above step should not return any errors, which means that we can go ahead.



Step 3: Start the redefinition

BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(
    UNAME      => 'MPCI5171',
    ORIG_TABLE => 'CESAR_VEHICLE_HISTORY',
    INT_TABLE  => 'CESAR_VEHICLE_HISTORY_INT');
END;
/

Step 4: Synchronize the tables

BEGIN
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
    UNAME      => 'MPCI5171',
    ORIG_TABLE => 'CESAR_VEHICLE_HISTORY',
    INT_TABLE  => 'CESAR_VEHICLE_HISTORY_INT');
END;
/



Step 5: Create the constraints and indexes on the interim table, matching the original table but with new names

CREATE INDEX MPCI5171.CES_VEHIHIST_P_NOINT_IDX01 ON MPCI5171.CESAR_VEHICLE_HISTORY_INT
(VEHICLE_NO_INT)
LOGGING
TABLESPACE MPCINDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT)
NOPARALLEL;

CREATE INDEX MPCI5171.CES_VEHIHIST_P_VACT_IDX ON MPCI5171.CESAR_VEHICLE_HISTORY_INT
(VACTION)
LOGGING
TABLESPACE MPCINDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT)
NOPARALLEL;

CREATE UNIQUE INDEX MPCI5171.CES_VEHI_HIST_P_PK ON MPCI5171.CESAR_VEHICLE_HISTORY_INT
(VEHICLE_NO_INT, TIMESTAMP)
LOGGING
TABLESPACE MPCINDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT)
NOPARALLEL;

CREATE INDEX MPCI5171.IX1CESAR_VEHICLE_HISTORY_P ON MPCI5171.CESAR_VEHICLE_HISTORY_INT
(ACTION)
LOGGING
TABLESPACE MPCINDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT)
NOPARALLEL;

CREATE INDEX MPCI5171.IX2CESAR_VEHICLE_HISTORY_P ON MPCI5171.CESAR_VEHICLE_HISTORY_INT
(VEHICLE_NO)
LOGGING
TABLESPACE MPCINDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT)
NOPARALLEL;

ALTER TABLE MPCI5171.CESAR_VEHICLE_HISTORY_INT ADD (
  CONSTRAINT CES_VEHI_HIST_PK_P PRIMARY KEY (VEHICLE_NO_INT, TIMESTAMP)
  USING INDEX
  TABLESPACE MPCINDEX
  PCTFREE 10
  INITRANS 2
  MAXTRANS 255
  STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0));



Step 6: Gather statistics on the interim table

EXEC DBMS_STATS.GATHER_TABLE_STATS('MPCI5171', 'CESAR_VEHICLE_HISTORY_INT', CASCADE => TRUE);



Step 7: Finish the Redefinition Process

BEGIN
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(
    UNAME      => 'MPCI5171',
    ORIG_TABLE => 'CESAR_VEHICLE_HISTORY',
    INT_TABLE  => 'CESAR_VEHICLE_HISTORY_INT');
END;
/

At this point the interim table has become the "real" table and the two names have been switched in the data dictionary. All that remains is to perform some cleanup operations.



Step 8: Now drop the interim table CESAR_VEHICLE_HISTORY_INT

DROP TABLE CESAR_VEHICLE_HISTORY_INT;



Step 9: Rename the constraints and indexes to match the original names

ALTER INDEX CES_VEHIHIST_P_NOINT_IDX01 RENAME TO CES_VEHIHIST_T_NOINT_IDX01;
ALTER INDEX CES_VEHI_HIST_P_PK RENAME TO CES_VEHI_HIST_T_PK;
ALTER INDEX IX1CESAR_VEHICLE_HISTORY_P RENAME TO IX1CESAR_VEHICLE_HISTORY_T;
ALTER INDEX IX2CESAR_VEHICLE_HISTORY_P RENAME TO IX2CESAR_VEHICLE_HISTORY_T;
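To verify the renames, the data dictionary can list the indexes the redefined table now owns; any name still carrying the interim pattern (such as CES_VEHIHIST_P_VACT_IDX from Step 5) would still need renaming. Constraints can be renamed similarly with ALTER TABLE ... RENAME CONSTRAINT. A quick sketch:

SELECT INDEX_NAME FROM USER_INDEXES WHERE TABLE_NAME = 'CESAR_VEHICLE_HISTORY';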



Step 10: Once the partitioning is implemented, it can be tested with the following SQL queries:

SELECT PARTITIONED FROM USER_TABLES WHERE TABLE_NAME = 'CESAR_VEHICLE_HISTORY';

This returns YES, which means that the original table is now partitioned.

SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME = 'CESAR_VEHICLE_HISTORY';

The above query returns the names of the partitions of the original table.
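Since statistics were gathered in Step 6, it is also worth checking how evenly the rows hash across the partitions; a short sketch:

SELECT PARTITION_NAME, NUM_ROWS
FROM USER_TAB_PARTITIONS
WHERE TABLE_NAME = 'CESAR_VEHICLE_HISTORY'
ORDER BY PARTITION_POSITION;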

5.2.1 Test Results

In the production environment...

read more......

Friday, December 26, 2008

Sap Data Warehousing Solution

SAP BW is an end-to-end data warehousing solution built on established SAP technologies. It is built on a 3-tier architecture and coded in the ABAP (Advanced Business Application Programming) language, and it uses ALE (Application Link Enabling) and BAPIs (Business Application Programming Interfaces) to link BW with both SAP and non-SAP systems.

BW Architecture

BW has three layers. The top layer is the reporting layer, which may be the BW Business Explorer (BEx) or a third-party reporting tool. BEx consists of two components: the BEx Analyzer and the BEx Browser.

The middle layer, the BW Server, carries out three tasks: it administers the BW system, stores data, and retrieves data. The bottom layer consists of source systems, which may be R/3 systems, BW systems, flat files, and other systems. A SAP component called the Plug-In must be installed in the source systems; it contains extractors. An extractor is a set of ABAP programs, database tables, and other objects that BW uses to extract data from SAP systems. The BW Server contains the Administrator Workbench, the Metadata Repository and Metadata Manager, the Staging Engine, the PSA, ODS objects, and user roles.

The Administrator Workbench maintains metadata and all BW objects. It has two components, the BW Scheduler and the BW Monitor, which help load data and monitor the loads.

The Metadata Repository contains information about the data warehouse itself. Its metadata is of two types: business-related and technical. The Metadata Manager is used to maintain the Metadata Repository.

The PSA (Persistent Staging Area) is part of the BW Server. It stores data in its original source format while the data is being imported from the source system, enabling quality checks before the data is loaded into its destination, such as ODS objects or InfoCubes.

ODS (Operational Data Store) objects help build a multilayer structure for operational data and are used for detailed reporting.

An InfoCube is a fact table together with its associated dimension tables, arranged in a star schema.

The OLAP processor is the analytical processing engine; it retrieves and analyzes data in response to users' requests.

Documents are stored in BDS (Business Document Services). The documents can appear in different formats like Microsoft Word, Excel, PowerPoint, PDF, and HTML.

BW Business Content

BW's most powerful selling point is Business Content, which contains standard reports and other associated objects. For its standard reports, BW uses a function called Generic Data Extraction to extract R/3 data.

BW is evolving rapidly; keeping this in mind helps in planning BW projects and their scope.

The mySAP e-business platform consists of three components: mySAP Technology, mySAP Services, and mySAP Hosted Solutions.

mySAP Technology provides an infrastructure for the Web Application Server and for process-centric collaboration. This infrastructure contains a component called mySAP Business Intelligence.

mySAP Services covers the support services SAP offers to customers, ranging from business analysis and technology implementation to training and system support.

mySAP Hosted Solutions are SAP's outsourcing services; with them, customers do not need to maintain their own physical machines and networks.
read more......

Database Basics

Databases are valuable resources that enable a myriad of business tools and practices, such as customer relationship management, efficiency analyses and projection reports, to name a few. Still, many people have difficulty conceptualizing what exactly a database is. A database is an organized collection of often vast amounts of related information that can be readily accessed and used as input to a range of applications, offering new or deeper insight into the business or enabling new competitive advantages. In one sense databases can be likened to spreadsheets, containing fields and records of related but separate information that can be accessed for analysis. In fact, databases and spreadsheets can in some cases be used in lieu of one another, but spreadsheets mainly answer inquiries about numerical data, while databases are likely to contain more complex items such as images, dates, links, text, and numbers. The range of information that can be contained within and accessed by databases makes them indispensable for business data retention, compilation and interpretation.

To understand a database, first understanding some basic database related terminology is helpful. A field is a single aspect of the data contained within the database, while a record is the collection of data for one item in the database. Assume we are creating an employee information database that will contain employee names, addresses and telephone numbers. The employee name ‘Rebecca Smith’ could theoretically be contained within a field, while Rebecca’s name, address and telephone number would constitute one complete record. The single collection of all of the employees’ contact information represents a table, and multiple tables may be linked to other related tables, creating a relational database.

Planning is an important stage in database development to ensure that necessary information can easily be retrieved and utilized. Assume our employee information database above contains the first and last name of each employee in a field designated for employee name, with the complete address stored in the address field. Now suppose we need to create an alphabetical list of all employees by last name who live within ten miles of the office. For a well planned database, this would be a basic task. But for this example, how could the database distinguish between first name and last name, and how would it interpret the jumble of information we call ‘address’? It wouldn’t, which is what makes database planning so important. By anticipating how information would need to be retrieved in the future, we could have eliminated this problem by structuring separate fields for first name, last name, street address, city, state and zip code.
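To make the planning point concrete, here is a minimal sketch (with hypothetical names and column sizes, in the Oracle dialect used elsewhere on this blog) of the well-structured employee table:

CREATE TABLE EMPLOYEES (
  EMPLOYEE_ID    NUMBER(10) PRIMARY KEY,
  FIRST_NAME     VARCHAR2(40),
  LAST_NAME      VARCHAR2(40),
  STREET_ADDRESS VARCHAR2(60),
  CITY           VARCHAR2(40),
  STATE          VARCHAR2(2),
  ZIP_CODE       VARCHAR2(10),
  PHONE          VARCHAR2(20)
);

-- With separate fields, the alphabetical list by last name becomes trivial;
-- the ZIP codes below stand in for "within ten miles of the office":
SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEES
WHERE ZIP_CODE IN ('10001', '10002')
ORDER BY LAST_NAME;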

Databases do not only store information; they typically provide an interface through which users can collect and analyze information with ease. Most databases provide users with a means of establishing the data structure, form tools for easy data entry, a query engine that allows us to request information from the database, and a report function for outputting the results of queries.

For businesses, information is power. Databases give companies unprecedented means of retrieving and analyzing important data that allows them to continuously improve and streamline processes for increased efficiency and ultimately, increased profitability.
read more......

Sql Tuning

Companies that rely upon database information obtained by SQL queries commonly encounter performance issues as their databases grow to contain mass amounts of information. Over time, the same processes that have historically proven successful may become inadequate when the application needs to accommodate thousands or millions of database records. Suppose a company has always utilized a process in which a SQL query looks for a name by checking the first row to see if it holds the specified name, and if not, checking the second row for the name, and so on. When this company’s database held only fifty names, this method would have been perfectly efficient and acceptable. But how would the same query go about locating the same name once the company’s database has expanded to contain a thousand names? What about twenty million? The query would use the same system and likely get the job done, but not very efficiently. The CPU usage and response time would be compromised as the query was forced to navigate through mass amounts of information seeking one single record.
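As an illustration of the difference, consider a hedged sketch with hypothetical table, column, and index names: a single index often turns the row-by-row scan described above into a direct lookup.

-- Without an index on LAST_NAME, this query must inspect every row:
SELECT * FROM CUSTOMERS WHERE LAST_NAME = 'Smith';

-- Creating an index lets the database jump straight to the matching rows:
CREATE INDEX IX_CUSTOMERS_LAST_NAME ON CUSTOMERS (LAST_NAME);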

Database expansion poses challenges to developers and business owners whose profitability relies on optimal software performance. When an indefinite amount of information is expected to be added to a database, it becomes apparent that fine tuned SQL statements are necessary to minimize CPU usage and response time, which is where SQL tuning comes in.

SQL tuning involves streamlining the process through which SQL queries locate the sought-after information. Innovative enterprise data availability companies have developed software solutions that analyze the performance of code and automate the identification of the specific applications causing performance delays, with the goal of reducing the time and CPU usage required to implement a new application. Such technology allows developers to analyze, test and correct the performance of new or existing applications in a test environment without affecting production. It also allows developers to identify which specific issues have the greatest effect on performance, so that they may focus their energy on correcting the highest-priority problems first.

SQL tuning technology targets and significantly improves the performance of SQL applications, in turn increasing QA throughput and reducing the need for time-consuming manual testing. It also provides developers with a tool to help analyze and maximize the efficiency of their applications during development, thus increasing productivity as well as enhancing the quality of their work.
read more......

Importance of Database Uptime

For many businesses, logging, warehousing and processing information about transactions is the lifeline of their corporate strategy and crucial to their profitability. Important records detailing a company's user history, product inventory and shipment tracking, supplier information, configuration settings, or any other necessary collections of information are most often stored in and retrieved from databases. Databases provide a convenient means of storing vast amounts of information, allowing the information to be sorted, searched, viewed, and manipulated according to the business needs and goals. Many companies rely so heavily on the functions of databases that their daily business operations cannot be executed if databases are unavailable, making database management and maintenance a vital component of their business models.

The significance of database uptime, and the hazard of downtime, can best be illustrated with a hypothetical example. Suppose ABC Company runs a subscription-based web application that provides a variety of on-demand services to its subscribers. Every piece of data that ABC Company uses to provide its subscription services is stored in one or more massive databases, and fulfilling user requirements relies on the website's ability to access, format and deliver database information almost instantly. If a database is undergoing maintenance, however, queries cannot access the information needed to create a deliverable, depriving users of the services for which they are paying. On a large scale, database downtime can result in lost clients and sales, and thus damage the profitability and success of the business. Hence, efficient database management capabilities are crucial to the very existence of many businesses.

The need for database maintenance is unavoidable, so enterprise data availability software solutions have been created to help businesses reduce downtime from hours or days to mere minutes or even seconds. Effective database management applications can reduce or eliminate downtime that renders a database unavailable, giving business owners and developers a flexible and powerful tool to protect the performance of their valuable business operations.

Companies which provide enterprise data availability software and services help businesses manage their databases by offering services such as backup and recovery, automation of maintenance tasks, and fine tuning performance efficiency, among others. With an assessment of current database maintenance practices, enterprise data availability companies can recommend the appropriate system to implement that can solve the database management shortfalls of most organizations, playing a valuable role in the protection and longevity of their clients.
read more......

What is Etl & Datawarehouse?

You have probably heard about datawarehouses and analysis tools in database-related discussions or in job meetings.
The fact is that every day more and more companies are starting to think about (and build) data warehouses or other data analysis and statistical tools.
It is no longer a matter for large companies only; today, with the right knowledge and expertise (or a little education), small companies can also run analyses and get valuable information to boost sales or revenues.

So what is a datawarehouse?

In simple words, a Datawarehouse is a common repository (a database, for simplicity) of information about a company’s activities and operations.
This means, all your company’s transactions such as sales, payments or acquisitions end up in the data warehouse.
This “database” is a technical product or platform that allows us to “ask” real life business questions such as “which branch sold more products this month?” or “who is my top performing salesman?”
You may be thinking “I currently HAVE a database which records sales/transactions/movements of my company everyday”.
Yes, of course you have one of those. But that is a transactional database: it is heavily used every day to store the company's operations, and because of this we cannot use it for analysis.
Also, a transactional database holds data, not information.
A typical record would read "Qty: 1, Product_code: Shoes A, Code_Branch: South ...". That is raw data, not meaningful information, and on its own it cannot answer the complex business questions that drive management decisions.
These and a few other points are the key reasons to build a datawarehouse for analysis.

So how do I move or copy the data from my everyday transactional database to my datawarehouse?

Here is where ETL comes into play.
ETL is the process of Extracting, Transforming and Loading data from one database to another.
There are several ways of doing this, from coding your own processes to the more common approach of using ETL tools.
These ETL tools can do the job very well and, if chosen wisely, can save you a lot of coding effort and money, since you can build processes graphically, in most cases without knowing how to program for databases.
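Whether you code it yourself or let a tool generate it, the core of a simple ETL step can be a single SQL statement. A minimal sketch with hypothetical table and column names: extract recent rows from the transactional sales table, transform the codes into readable values, and load the result into a warehouse fact table.

-- Extract, transform, and load in one statement:
INSERT INTO DW_SALES_FACT (SALE_DATE, BRANCH_NAME, PRODUCT_NAME, QTY)
SELECT S.SALE_DATE,
       B.BRANCH_NAME,    -- transform: branch code into a readable name
       P.PRODUCT_NAME,   -- transform: product code into a readable name
       S.QTY
FROM SALES S
JOIN BRANCHES B ON B.BRANCH_CODE = S.CODE_BRANCH
JOIN PRODUCTS P ON P.PRODUCT_CODE = S.PRODUCT_CODE
WHERE S.SALE_DATE >= TRUNC(SYSDATE) - 1;  -- load yesterday's and today's rows

-- The business question quoted earlier, "which branch sold more
-- products this month?", then becomes a one-liner on the warehouse:
SELECT BRANCH_NAME, SUM(QTY) AS UNITS_SOLD
FROM DW_SALES_FACT
WHERE SALE_DATE >= TRUNC(SYSDATE, 'MM')
GROUP BY BRANCH_NAME
ORDER BY UNITS_SOLD DESC;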

There are a lot of ETL tools on the market right now. As advice, I suggest investing some research time (and testing, if possible) before choosing the one that suits your company's needs.
They range from free open source tools to high-priced commercial tools. None of them is perfect in every situation, and you will have to take into account your data volumes, the analysis and answers you want from your datawarehouse, and how often you need those answers, among other aspects.

More info: http://www.etlreviews.com
read more......

Book Review: Beginning Sql Server 2005 Programming

SQL books don't get better than this for ASP.NET companies such as ours!

As our company expands, I needed a good reference for old and new developers alike. This book manages to serve experienced programmers while appealing to first-timers as well.

As you may know, one of my main gripes about new staff is the lack of training that colleges and universities give; SQL is a prime example. If T-Enterprise were to employ three graduates, I can guarantee two of them would not even know how to log in to a SQL server, never mind build complex ASP.NET applications with it!

Couple that with the fact that we do not have the time to train staff in the most basic of operations, and this book is a godsend!

No IT firm should be without this!

On a lighter note - the author, Robert Vieira, is extremely casual in his teaching; most of all, his smug face on the cover gives the impression that you are learning from the best!

Book Description
* After a quick primer on database design basics and the SQL query language (for those programmers who may be building their first database application), this book provides an overview of SQL Server itself, which has been dramatically redesigned with the 2005 release
* Once readers have grasped the fundamentals of database design and SQL concepts, they will then learn how to implement those concepts with Microsoft SQL Server 2005
* Addresses creating and changing tables, managing keys, database normalization, writing scripts, working with stored procedures, programming with XML, and using SQL Server reporting and data transformation services
* The companion Web site provides all of the code found in the book

read more......

Communicate your Database's Capabilities More Effectively With Ms Visio

If you own a company, then you no doubt appreciate that keeping your databases current and fully functional is key to keeping your business up and running at full steam. Those records come in handy for your sales department and for any number of different divisions that work together to keep your company running at optimal levels. When it comes time to prepare documents, presentations or reports that describe all of the functions your databases perform, you will want to incorporate Visio database diagrams to make your business documents easier to understand and more visually engaging.

When you have several people working together as a team on a project that involves various databases, it makes sense to communicate the information that the databases provide effectively, so that the group can work together efficiently. Microsoft Visio can help with this important task. For your database to be completely useful, you will need not only to manage it but also to be able to ask questions and set up scenarios using the data it contains. Visio network files can help you picture how all of those different processes function, in an easy-to-understand visual manner. Flow and work charts, graphs and other diagrams can be created on a Visio template page that uses Visio shapes to create the images that best describe your database's functions.

If you are responsible for training new employees who will be working with your company's databases, you need a quick and easy way to let them know about each database's function and how the databases connect and relate to each other. Visio diagrams can make that happen, more easily than you might have thought possible. With pre-installed templates, you can create diagrams on your own. If you require highly detailed pictures to describe your databases' functions, customization of Visio can make that possible. Visio developers can consult with you to determine your specific business needs, and then create the shapes that will make your descriptions of your databases specific, and therefore more useful.

You can also utilize Visio in conjunction with other Office software that you rely upon, such as Excel, and convert the data in your Excel spreadsheet into a visual representation of that information through the connectivity of Visio. These diagrams can also be used in an analytic fashion, to help you notice trends and take timely action that can improve the usefulness of your databases. You can also update your diagrams as you have new information to import, which helps you keep track of your databases' quality and integrity, both crucial factors upon which your databases depend.
read more......

Useful Tools for Btrieve, Pervasive.sql, and Other Sql Server

Here you can find useful and inexpensive software for working with Btrieve, Pervasive SQL, MySQL, Firebird SQL and more: popular, affordable systems running on the NetWare, Windows and DOS platforms.

Database Manager for Pervasive SQL Version 2.1 gives you full control over your Pervasive SQL database. It offers an easy way to operate a database using SQL scripts, an MDI interface, and management of all database objects: tables, views, procedures, relations, triggers and users. The Export and Import Wizard will transfer your data to and from other applications through ODBC.

New! Now you can change the DDF dictionary dynamically, free of charge. Download the library here. Want a custom-made library? Let us know.

The following programs are also offered:

Btrieve Grid Control - lets you quickly create programs that edit data in Btrieve files. You can edit your files in a grid or a form, search data, and more. The control is available for use in VC and VB.

DDF Editor for Btrieve - lets you create data-description dictionaries and create, view, edit, export and import Btrieve files. Version 2.0 supports all Pervasive SQL data types. DDF Editor also supports Btrieve 6.15 for Windows and NetWare, as well as earlier NetWare versions, so if you have only a NetWare server, this program will still work. As an additional bonus you get an API (Visual C++) for access from 32-bit applications to the 16-bit Btrieve client.

Database Manager - gives you full supervision, management and administration of your Pervasive SQL database. You can easily control users and groups through privileges on tables and columns; create and edit procedures, triggers, tables and views; and import and export data and descriptions from any ODBC source. You can also execute SQL scripts. Database Manager is not limited to Pervasive SQL; many of its capabilities apply to any ODBC source.

BtrOle - an OLE Automation server that provides immediate access to Btrieve data from Excel and Visual Basic.

Unix Client for Btrieve - gives you rapid access to Btrieve from any Unix system. A user manager restricts access to only the chosen users and hosts.

Read more about it

http://www.cuvashi.com/business/database/
read more......