Pivot table with Oracle SQL

Often, we store data in a normalized way in our database, perhaps something that resembles key-value pairs. To illustrate, consider the following table, which describes properties of computers.

select comp_id, comp_property, comp_property_value from schema1.comp_properties;


COMP_ID COMP_PROPERTY COMP_PROPERTY_VALUE
------- ------------- -------------------
1       hostname      comp1
1       ipaddress     192.168.0.1
1       os            Windows XP SP2
1       hdd1          60gb
1       hdd2          500gb
2       hostname      comp2
2       ipaddress     192.168.0.2
2       os            MacOS 10.5
2       hdd1          80gb
3       hostname      comp3
3       os            Ubuntu Linux 7.10

In this particular case, we can see that the three computers have different properties. With only three computers, we can read the information easily, but it becomes cumbersome once we are managing 100 computers instead of three. Thus, a pivot table type of presentation would be much more useful.

To rewrite the SQL statement to present the same data in a pivot table manner, we can do the following.

create or replace view schema1.v_comp_properties as
select comp_id, hostname, ipaddress, os, hdd1, hdd2
from (
	select comp_id,
	max(case when comp_property='hostname' then comp_property_value else null end) as hostname,
	max(case when comp_property='ipaddress' then comp_property_value else null end) as ipaddress,
	max(case when comp_property='os' then comp_property_value else null end) as os,
	max(case when comp_property='hdd1' then comp_property_value else null end) as hdd1,
	max(case when comp_property='hdd2' then comp_property_value else null end) as hdd2
	from schema1.comp_properties
	group by comp_id
);

The result should look like the following.

select comp_id, hostname, ipaddress, os, hdd1, hdd2 from schema1.v_comp_properties;

COMP_ID HOSTNAME IPADDRESS   OS                HDD1 HDD2
------- -------- ----------- ----------------- ---- -----
1       comp1    192.168.0.1 Windows XP SP2    60gb 500gb
2       comp2    192.168.0.2 MacOS 10.5        80gb
3       comp3                Ubuntu Linux 7.10

With this pivot table, we will be able to see all properties of each computer on a single row, while still maintaining a normalized database design.
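
As an aside, if you are on Oracle 11g or later, the built-in PIVOT clause can express the same transformation without the case expressions; a sketch of the equivalent query:

select *
from (
	select comp_id, comp_property, comp_property_value
	from schema1.comp_properties
)
pivot (
	max(comp_property_value)
	for comp_property in ('hostname' as hostname, 'ipaddress' as ipaddress,
		'os' as os, 'hdd1' as hdd1, 'hdd2' as hdd2)
);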

Writing SQL output to CSV with VBScript

Enter your specific information in the “Configuration” section near the top of the script. For the “dbType” variable, the only accepted values are “oracle”, “sqlserver”, or “mysql”. Once this is done, just run the script and you should have your quote-delimited comma-separated CSV file!

The email-related variables are optional. To enable the emailing functionality (sending the generated CSV file to the address as an attachment), assign the recipient email address to the email variable. If you do not wish to email, just leave that variable as an empty string (“”), and the other email-related variables, such as subj, body, smtp, and smtpPort, will be ignored.

'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
' Data Exporter                                                 '
'                                                               '
' Description: Allows the output of data to CSV file from a SQL '
'       statement to either Oracle, SQL Server, or MySQL        '
' Author: C. Peter Chen, http://dev-notes.com                   '
' Version Tracker:                                              '
'       1.0   20080414 Original version                         '
'       1.1   20080807 Added email functionality                '
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
option explicit
dim dbType, dbHost, dbName, dbUser, dbPass, outputFile, email, subj, body, smtp, smtpPort, sqlstr

'''''''''''''''''
' Configuration '
'''''''''''''''''
dbType = "oracle"                 ' Valid values: "oracle", "sqlserver", "mysql"
dbHost = "dbhost"                 ' Hostname of the database server
dbName = "dbname"                 ' Name of the database/SID
dbUser = "username"               ' Name of the user
dbPass = "password"               ' Password of the above-named user
outputFile = "c:\output.csv"      ' Path and file name of the output CSV file
email = "email@me.here"           ' Enter email here should you wish to email the CSV file (as attachment); if no email, leave it as empty string ""
subj = "Email Subject"            ' The subject of your email; required only if you send the CSV over email
body = "Put a message here!"      ' The body of your email; required only if you send the CSV over email
smtp = "mail.server.com"          ' Name of your SMTP server; required only if you send the CSV over email
smtpPort = 25                     ' SMTP port used by your server, usually 25; required only if you send the CSV over email
sqlStr = "select user from dual"  ' SQL statement you wish to execute
'''''''''''''''''''''
' End Configuration '
'''''''''''''''''''''



dim fso, conn

'Create filesystem object 
set fso = CreateObject("Scripting.FileSystemObject")

'Database connection info
set Conn = CreateObject("ADODB.connection")
Conn.ConnectionTimeout = 30
Conn.CommandTimeout = 30
if dbType = "oracle" then
	conn.open("Provider=MSDAORA.1;User ID=" & dbUser & ";Password=" & dbPass & ";Data Source=" & dbName & ";Persist Security Info=False")
elseif dbType = "sqlserver" then
	conn.open("Driver={SQL Server};Server=" & dbHost & ";Database=" & dbName & ";Uid=" & dbUser & ";Pwd=" & dbPass & ";")
elseif dbType = "mysql" then
	conn.open("DRIVER={MySQL ODBC 3.51 Driver}; SERVER=" & dbHost & ";PORT=3306;DATABASE=" & dbName & "; UID=" & dbUser & "; PASSWORD=" & dbPass & "; OPTION=3")
end if

' Subprocedure to generate data.  Two parameters:
'   1. fPath=where to create the file
'   2. sqlstr=the database query
sub MakeDataFile(fPath, sqlstr)
	dim a, showList, intcount
	set a = fso.createtextfile(fPath)
	
	set showList = conn.execute(sqlstr)
	for intcount = 0 to showList.fields.count -1
		if intcount <> showList.fields.count-1 then
			a.write """" & showList.fields(intcount).name & ""","
		else
			a.write """" & showList.fields(intcount).name & """"
		end if
	next
	a.writeline ""
	
	do while not showList.eof
		for intcount = 0 to showList.fields.count - 1
			if intcount <> showList.fields.count - 1 then
				a.write """" & showList.fields(intcount).value & ""","
			else
				a.write """" & showList.fields(intcount).value & """"
			end if
		next
		a.writeline ""
		showList.movenext
	loop
	showList.close
	set showList = nothing

	set a = nothing
end sub

' Call the subprocedure
call MakeDataFile(outputFile,sqlstr)

' Close
set fso = nothing
conn.close
set conn = nothing

if email <> "" then
	dim objMessage
	Set objMessage = CreateObject("CDO.Message")
	objMessage.Subject = subj
	objMessage.From = email
	objMessage.To = email
	objMessage.TextBody = body
	objMessage.AddAttachment outputFile
	
	objMessage.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
	objMessage.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/smtpserver") = smtp
	objMessage.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = smtpPort
	
	objMessage.Configuration.Fields.Update
	
	objMessage.Send
end if

'You're all done!!  Enjoy the file created.
msgbox("Data Writer Done!")
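
To run the script, execute it with cscript from a command prompt; the file name below is just an example:

cscript //nologo data_exporter.vbs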

Creating database links

The following example assumes you have two databases, db1 and db2. You are currently logged into db1 and need a link to db2. We also assume the entry for db2 has already been established in the server’s TNSNAMES.ORA file.

-- Fixed user method
create database link db2
connect to scott
identified by tiger 
using 'db2';

-- Connected user method
create database link db2
using 'db2';

-- Current user method
create database link db2
connect to current_user 
using 'db2';

The difference between the three methods above is user authentication. In the fixed user example, regardless of who you are in db1, you will always access db2 as the user “scott”. With the connected user method, the link connects to db2 as whichever user is using the link, so you will need to make sure you have a valid account in db2 as well. With the current user method, the link connects as the current user, which can differ from the connected user when the link is used inside a definer’s-rights stored procedure; this is arguably the most secure method.

If you need the link to be seen by every user, you may consider a public link:

create public database link db2
using 'db2';

With the database link established, you are now able to perform DML queries in db2 from db1. Here’s an example:

select a.name, b.telephone
from myschema.user_list a, scott.address_book@db2 b
where a.name=b.name;

Note the usage of “@db2” in the above example which signifies the usage of the database link.
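
As a side note, if you would rather not sprinkle “@db2” throughout your queries, a synonym can hide the link behind a local name; reusing the objects from the example above:

create synonym address_book for scott.address_book@db2;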

One thing to keep in mind when establishing database links is security; make sure you understand the potential risks of your design before implementing it. The fixed user method is the most commonly used, and the fixed user often ends up having more privileges in the remote database than necessary, allowing users in db1 to do more things in db2 than they should. Another concern is how Oracle stores fixed user passwords — they are stored in the SYS.LINK$ table in clear text!
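
To audit what links already exist in a database, the data dictionary can help; for example:

select owner, db_link, username, host, created
from dba_db_links;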

Oracle database health checks

Checking for any down data files:

select file_name, tablespace_name, status, bytes/1000000 as mb, maxbytes/1000000 as mb_max 
from dba_data_files 
where status<>'AVAILABLE';

Checking for any down control files:

select name, status
from v$controlfile 
where status is not null;

Checking for any down tablespaces:

select tablespace_name, status, contents, extent_management
from dba_tablespaces
where status<>'ONLINE';

Checking to see if a tablespace is getting close to max size (less than 10% free):

select total.name as tblspace,
  nvl(free_space, 0) as free,
  nvl(total_space-free_space, 0) as used,
  total_space,
  round(nvl(free_space,0)/total_space,2)*100 as pct_free
from 
  (
    select tablespace_name, sum(bytes/1024/1024) as free_Space 
    from sys.dba_free_space 
    group by tablespace_name
  ) free, 
  (
    select b.name, sum(bytes/1024/1024) as total_space
    from sys.v_$datafile a, sys.v_$tablespace b 
    where a.ts# = b.ts#
    group by b.name
  ) total 
where free.tablespace_name(+) = total.name 
and total.name='TABLESPACE_NAME_HERE' 
and round(nvl(free_space, 0)/total_space,2)<=0.10;

Suspicious of someone stealing your system passwords and logging in remotely? Use this to check for system logins from machines you do not expect.

select username, osuser, program, machine 
from sys.v_$session 
where upper(machine) not in('LIST_VALID_MACHINES_HERE','DBA_PC','BOSS_PC') 
and upper(username) in('SYS','SYSTEM');

Identify database locks

This select statement can be run by someone with DBA privileges (as it makes use of v$ views) to list who is locking what.

select a.session_id,
  c.serial#,
  a.oracle_username,
  a.os_user_name,
  c.machine,
  a.locked_mode,
  b.object_name,
  b.object_type
from v$locked_object a,dba_objects b, v$session c
where a.object_id = b.object_id
  and a.session_id = c.sid;

The query below is more specific. With it, you will be able to identify exactly which session is blocking another session, if any such condition exists.

select s1.username || '@' || s1.machine || ' ( SID=' || s1.sid || ' ) is blocking ' || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' as blocking
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.block=1 and l2.request > 0
and l1.id1 = l2.id1 and l1.id2 = l2.id2;

If necessary, we can use the following example to kill a particular session. It may be useful when the locking session belongs to a hung process, for example, though typically we should have the user release his/her lock gracefully.

alter system kill session '123,5555' immediate;

Note that the “123” above should be replaced by the session ID, and the “5555” should be replaced by the serial #. Those two values are shown in the first two fields in the select SQL statement above.
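
If several sessions are holding locks, we can also let the database write the kill statements for us; a small sketch reusing the same views as the first query above:

select distinct 'alter system kill session ''' || c.sid || ',' || c.serial# || ''' immediate;' as kill_stmt
from v$locked_object a, v$session c
where a.session_id = c.sid;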

Oracle user management

Create a new user:

create user bob
identified by str0nggpAsswd
default tablespace tblspace1
temporary tablespace temp;

Change user password:

alter user bob
identified by n3wPaswrd0;

Unlock a locked user account:

alter user bob
account unlock;

Assign privileges to user:

-- Allow user to connect to the database
grant create session to bob;

-- Allow user to create various objects;
-- usually reserved to DBAs and developers.
grant create table to bob;
grant create view to bob;
grant create sequence to bob;

-- Grant/revoke various table/view rights to objects to a user.
grant select on address_table to bob;
grant insert, update, delete on address_table to bob;
grant all on address_table to bob;

revoke delete on address_table from bob;
revoke all on address_table from bob;

-- Grant/revoke execute rights on functions/procedures for a user.
grant execute on sp_validate_address to bob;
revoke execute on sp_validate_address from bob;

Lock/unlock a user:

alter user bob account lock;
alter user bob account unlock;

Remove/delete a user:

drop user bob cascade;

List Oracle users:

-- List all users
select username, user_id, account_status, default_tablespace, temporary_tablespace, profile 
from dba_users;

-- List active users only
select username, user_id, account_status, default_tablespace, temporary_tablespace, profile 
from dba_users
where account_status='OPEN';
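
To see what a given user can already do, the standard data dictionary views can be queried; a sketch using the user “bob” from the examples above (note that grantee names are stored in uppercase):

-- List system privileges granted directly to a user
select privilege from dba_sys_privs where grantee='BOB';

-- List object privileges granted to a user
select owner, table_name, privilege from dba_tab_privs where grantee='BOB';

-- List roles granted to a user
select granted_role from dba_role_privs where grantee='BOB';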

Create an auto-increment field in Oracle

If we do not already have a table to work with, we will create a sample table for this purpose.

create table schema.test_table (
	row_id		number(5),
	name		varchar2(50),
	status		varchar2(1),
	entry_date	date
);

Let us say we want to make the “row_id” field an “auto-increment” field. To do so, we need a sequence object first. The code below gives us a sequence that starts at 1 and is capped at 99999.

create sequence schema.seq_test_table_row_id
minvalue 1
maxvalue 99999
start with 1
increment by 1
nocache;

All we have left is the actual “auto-increment” part. To do this, we build a very simple trigger on the table.

create trigger schema.trg_test_table_row_id
before insert on schema.test_table
for each row
begin
	if (:NEW.row_id) is null then
		select schema.seq_test_table_row_id.nextval into :NEW.row_id from dual;
	end if;
end;

Now, when you insert into the table schema.test_table, if you wish to use the sequence object as an auto-incrementer, simply do not supply a value for the row_id field. Two examples of using this auto-incrementer are below:

insert into schema.test_table (name, status, entry_date) values('John Doe','1',sysdate);
insert into schema.test_table values(null,'John Doe','1',sysdate);

What if we want to override this trigger and use a number of our choice for the row_id field? No problem, just insert the value as you would normally. Note the trigger has an if-clause in it; the trigger will only have an effect (ie. auto-increment from the sequence) if the incoming row_id field is not populated.
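
For example, the insert below keeps the supplied row_id of 500, since the incoming value is not null:

insert into schema.test_table values(500,'Jane Doe','1',sysdate);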

Cleaning up the listener log

If we have noticed the listener log growing out of control, we probably already know where it is. Just in case you need it, by default the log file is located at “network/log/listener.log” under your Oracle home.

The listener log cannot be renamed while the listener holds a lock on it, and more than likely we want to minimize the window during which logging is disabled, so we interrupt users the least. To do so, we are going to create a batch file so that the three commands below run successively in a short amount of time. Alternatively, you can perform them during business off-hours, if that is available to you. Whether run from a batch file or manually, the commands are the same.

lsnrctl set log_status off
rename listener.log listener.old
lsnrctl set log_status on

The first command turns listener logging off, which releases the file lock on the listener.log file so we can work with it. The second command renames the file. Finally, the third command turns logging back on to resume normal operations. Immediately after the third command runs, a new listener.log file will be created with 0 size.
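
For reference, a minimal batch file wrapping these commands might look like the following; the Oracle home path is only an example, so adjust it to your own installation:

@echo off
rem Rotate the listener log; logging is only briefly disabled.
cd /d C:\oracle\product\10.2.0\db_1\network\log
lsnrctl set log_status off
rename listener.log listener.old
lsnrctl set log_status on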

What to do with the old listener log “listener.old” is up to each of us. Personally, I tend to zip up the file and store it at a safe place, just in case I ever need to review it.

Using Oracle Data Pump

To use Data Pump, a directory must be set up. In the SQL*Plus code example below, we are creating a directory dedicated for Data Pump usage.

create or replace directory datapumpdir as 'd:\datadump';

With the Data Pump directory set, we can now run the following command line to export data. There are two examples below; the first is a full database export (ie. all schemas), and the second exports only one schema.

expdp username/password@tnsname directory=datapumpdir dumpfile=full%U.dmp filesize=2G logfile=full.log full=y

expdp username/password@tnsname directory=datapumpdir dumpfile=schema%U.dmp filesize=2G logfile=schema.log schemas=scott

Note the %U variable in the dump file name. Because we capped each dump file at 2GB, we introduced a potential failure point: what happens if our dump exceeds 2GB? The %U at the end of the file name means our first dump file will be tagged with the number “01”, eg. full01.dmp. If full01.dmp reaches 2GB in size, full02.dmp will be created automatically, then full03.dmp when full02.dmp fills up, and so on.

To import, we need to first make sure the receiving database has the directory set, and then we can run the command:

impdp username/password@tnsname schemas=scott directory=datapumpdir dumpfile=full%U.dmp logfile=full.log
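
As a side note, the import utility also supports schema remapping. For example, REMAP_SCHEMA (listed in the impdp help text further below) can load scott’s objects into a different schema; the target schema name here is made up for illustration:

impdp username/password@tnsname directory=datapumpdir dumpfile=schema%U.dmp logfile=import.log remap_schema=scott:scott_dev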

Below is the help file for the expdp command, for our reference.

Export: Release 10.1.0.4.0 - Production on Monday, 24 March, 2008 11:22

Copyright (c) 2003, Oracle.  All rights reserved.


The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

   Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:

   Format:  expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
   Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
               or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH                Attach to existing job, e.g. ATTACH [=job name].
CONTENT               Specifies data to unload where the valid keywords are:
                      (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY             Directory object to be used for dumpfiles and logfiles.
DUMPFILE              List of destination dump files (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE              Calculate job estimates where the valid keywords are:
                      (BLOCKS) and STATISTICS.
ESTIMATE_ONLY         Calculate job estimates without performing the export.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE              Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FLASHBACK_TIME        Time used to get the SCN closest to the specified time.
FULL                  Export entire database (N).
HELP                  Display Help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of export job to create.
LOGFILE               Log file name (export.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile (N).
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to export a subset of a table.
SCHEMAS               List of schemas to export (login schema).
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
TABLES                Identifies a list of tables to export - one schema only.
TABLESPACES           Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK  Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command               Description
------------------------------------------------------------------------------
ADD_FILE              Add dumpfile to dumpfile set.
                      ADD_FILE=dumpfile-name
CONTINUE_CLIENT       Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT           Quit client session and leave job running.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
                      PARALLEL=.
START_JOB             Start/resume current job.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
                      STATUS=[interval]
STOP_JOB              Orderly shutdown of job execution and exits the client.
                      STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                      Data Pump job.

Below is the help text for impdp command.

Import: Release 10.1.0.4.0 - Production on Monday, 24 March, 2008 11:26

Copyright (c) 2003, Oracle.  All rights reserved.


The Data Pump Import utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

     Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Import runs by entering the 'impdp' command followed
by various parameters. To specify parameters, you use keywords:

     Format:  impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
     Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

USERID must be the first parameter on the command line.

Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH                Attach to existing job, e.g. ATTACH [=job name].
CONTENT               Specifies data to load where the valid keywords are:
                      (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY             Directory object to be used for dump, log, and sql files.
DUMPFILE              List of dumpfiles to import from (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE              Calculate job estimates where the valid keywords are:
                      (BLOCKS) and STATISTICS.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FLASHBACK_TIME        Time used to get the SCN closest to the specified time.
FULL                  Import everything from source (Y).
HELP                  Display help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of import job to create.
LOGFILE               Log file name (import.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile.
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to import a subset of a table.
REMAP_DATAFILE        Redefine datafile references in all DDL statements.
REMAP_SCHEMA          Objects from one schema are loaded into another schema.
REMAP_TABLESPACE      Tablespace object are remapped to another tablespace.
REUSE_DATAFILES       Tablespace will be initialized if it already exists (N).
SCHEMAS               List of schemas to import.
SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
SQLFILE               Write all the SQL DDL to a specified file.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
STREAMS_CONFIGURATION Enable the loading of Streams metadata
TABLE_EXISTS_ACTION   Action to take if imported object already exists.
                      Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES                Identifies a list of tables to import.
TABLESPACES           Identifies a list of tablespaces to import.
TRANSFORM             Metadata transform to apply (Y/N) to specific objects.
                      Valid transform keywords: SEGMENT_ATTRIBUTES and STORAGE.
                      ex. TRANSFORM=SEGMENT_ATTRIBUTES:N:TABLE.
TRANSPORT_DATAFILES   List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK  Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.
                      Only valid in NETWORK_LINK mode import operations.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.
                      Only valid for NETWORK_LINK and SQLFILE.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command               Description (Default)
------------------------------------------------------------------------------
CONTINUE_CLIENT       Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT           Quit client session and leave job running.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
                      PARALLEL=.
START_JOB             Start/resume current job.
                      START_JOB=SKIP_CURRENT will start the job after skipping
                      any action which was in progress when job was stopped.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
                      STATUS=[interval]
STOP_JOB              Orderly shutdown of job execution and exits the client.
                      STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                      Data Pump job.

TNS Listener service missing

When you install Oracle, a starter database is created by default, but that step is sometimes bypassed because you might not need it. When this initial database is not created, the installer sometimes fails to create the TNS Listener service. To remedy this problem, all you have to do is run this command:

lsnrctl start

Before you run this command, you should take a look at the tnsnames.ora, listener.ora, and sqlnet.ora files to make sure they are properly configured.
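
Once the listener is started, you can confirm it is up and listening with:

lsnrctl status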