Change Leopard’s login background

These instructions were written for Mac OS X version 10.5.2 (Leopard).

First, run this command in Terminal to back up the default image, just in case you ever need it again.

sudo mv /System/Library/CoreServices/DefaultDesktop.jpg /System/Library/CoreServices/DefaultDesktopBackup.jpg

Then run the following Terminal command to copy the new background into the system library. In the example below, newBackground.jpg is the image of your choosing.

sudo cp newBackground.jpg /System/Library/CoreServices/DefaultDesktop.jpg

The next time you log in, your new image will appear in place of the default Mac one.

If you later decide to revert the change, just run the following Terminal command to copy the default image back.

sudo cp /System/Library/CoreServices/DefaultDesktopBackup.jpg /System/Library/CoreServices/DefaultDesktop.jpg
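
If you plan to change the image regularly, the two steps can be wrapped in a small shell script. Below is a minimal sketch; the script name and the argument handling are illustrative, not part of the original instructions.

#!/bin/sh
# setLoginBackground.sh (hypothetical helper)
# Usage: sudo ./setLoginBackground.sh /path/to/newBackground.jpg

TARGET=/System/Library/CoreServices/DefaultDesktop.jpg
BACKUP=/System/Library/CoreServices/DefaultDesktopBackup.jpg

# Back up the stock image the first time only
if [ ! -f "$BACKUP" ]; then
    mv "$TARGET" "$BACKUP"
fi

# Copy the chosen image into place
cp "$1" "$TARGET"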

Checking admin share free space and folder space usage

The code for these two space checks is below; the output report is generated in comma-delimited CSV format. Before running it, we need to set a few configuration items in the first section.

  • outputFile : Enter the path and file name of the output report.
  • i : The number of items you wish to check; in the example, the number is 4 (2 drives and 2 folders)
  • arrayList(x, y) : These are the details of each drive/folder you wish to check…

The code is as follows:

option explicit
dim outputFile, arrayList, fso, obj, fl, j, drivepath, driveName, status, msg
'''''''Config''''''''''

' Where do you want to output the results to?
outputFile = "checkSpace.log"

' How many drives do you want to check for free space?
const i = 4
redim arrayList(i,5) 'Please do not touch this line.

' List the server drives you want to check for free space or folders to check for space usage
' Each set has 6 values:
' a. D=drive, F=folder
' b. server hostname (type D) or UNC path of the folder (type F)
' c. admin share drive letter (applicable for type D only)
' d. warning level in gb (minimum free space for type D; maximum usage for type F)
' e. alarm level in gb (same semantics as the warning level)
' f. common name of this share

arrayList(0,0) = "D"
arrayList(0,1) = "web1"
arrayList(0,2) = "c"
arrayList(0,3) = 10
arrayList(0,4) = 5
arrayList(0,5) = "Windows 2003 web server, C-drive"

arrayList(1,0) = "D"
arrayList(1,1) = "exch2"
arrayList(1,2) = "d"
arrayList(1,3) = 200
arrayList(1,4) = 100
arrayList(1,5) = "Windows 2003 Exchange server, D-drive"

arrayList(2,0) = "F"
arrayList(2,1) = "\\fileserver\public\fileShare1"
arrayList(2,2) = ""
arrayList(2,3) = 1
arrayList(2,4) = 2
arrayList(2,5) = "File share 1"

arrayList(3,0) = "F"
arrayList(3,1) = "\\fileserver\public\fileShare2"
arrayList(3,2) = ""
arrayList(3,3) = 100
arrayList(3,4) = 200
arrayList(3,5) = "File share 2"

'''''''End Config'''''''''

set fso = CreateObject("Scripting.FileSystemObject")
set fl = fso.CreateTextFile(outputFile, true)

fl.writeline("""Item"",""Status"",""Message""")

j = 0
do while j <= i-1
	if arrayList(j,0) = "D" then
		drivepath = "\\" & arrayList(j,1) & "\" & arrayList(j,2) & "$"
		set obj = fso.GetDrive(fso.GetDriveName(drivepath))
	elseif arrayList(j,0) = "F" then
		drivepath = arrayList(j,1)
		set obj = fso.GetFolder(drivepath)
	else
		' Shouldn't really get in here; keep the raw item so an error row is still written
		drivepath = arrayList(j,1)
		set obj = nothing
	end if
		
	driveName = arrayList(j,5)
	
	if arrayList(j,0) = "D" then
		if round(obj.FreeSpace/1024/1024/1024) < arrayList(j,4) then
			status = "alarm"
			msg = drivepath & " (" & driveName & ") only has " & round(obj.FreeSpace/1024/1024/1024) & "gb free"
		else
			if round(obj.FreeSpace/1024/1024/1024) < arrayList(j,3) then
				status = "warning"
				msg = drivepath & " (" & driveName & ") only has " & round(obj.FreeSpace/1024/1024/1024) & "gb free"
			else
				status = "ok"
				msg = drivepath & " (" & driveName & ") is ok with " & round(obj.FreeSpace/1024/1024/1024) & "gb free"
			end if
		end if
	elseif arrayList(j,0) = "F" then
		if round(obj.size/1024/1024/1024) > arrayList(j,4) then
			status = "alarm"
			msg = drivepath & " (" & driveName & ") has reached " & round(obj.size/1024/1024/1024) & "gb"
		elseif round(obj.size/1024/1024/1024) > arrayList(j,3) then
			status = "warning"
			msg = drivepath & " (" & driveName & ") has reached " & round(obj.size/1024/1024/1024) & "gb"
		else
			status = "ok"
			msg = drivepath & " (" & driveName & ") is ok at " & round(obj.size/1024/1024/1024) & "gb used"
		end if
	else
		status = "error"
		msg = "Configuration error"
	end if
	
	fl.writeline("""" & drivepath & """,""" & status & """,""" & msg & """")
	
	set obj = nothing
	j = j+1
loop

set fl=nothing
set fso=nothing
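
To generate the report, save the script — say as checkSpace.vbs; the file name is our choice — and run it from a command prompt using the console-based script host:

cscript //nologo checkSpace.vbs

The //nologo switch just suppresses the script host banner; the results land in the file named by outputFile.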

The output CSV file should look something like this.

"Item","Status","Message"
"\\web1\c$","ok","\\web1\c$ (Windows 2003 web server, C-drive) is ok with 10gb free"
"\\exch2\c$","alarm","\\exch2\c$ (Windows 2003 Exchange server, D-drive) only has 16gb free"
"\\fileserver\public\fileShare1","warning","\\fileserver\public\fileShare1 (File share 1) has reached 2gb"
"\\fileserver\public\fileShare2","ok","\\fileserver\public\fileShare2 (File sahre2) is ok at 91gb used"

Using an .htaccess file to control web directory access

Naturally, if the .htaccess file (ht.acl on Windows) does not already exist in the directory we wish to protect, we must create it first. It is a plain text file, so any text editor — pico, emacs, Notepad, or TextEdit — can create and modify it.

Our first step is to add the following lines to the .htaccess file.

AuthName "This is a restricted area, please log in first."
AuthType Basic
AuthUserFile /directory/path/passwdfile

AuthName is the text that will appear in the browser pop-up when the user is challenged. An AuthType of "Basic" means we are using basic HTTP authentication. AuthUserFile is the path and file name of our password file; more on that shortly.

Also in the .htaccess file, we add the list of user names that are allowed to access the web directory we are locking down. For example:

require user jdoe
require user spannu
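
Putting it all together, the finished .htaccess file for our example contains just these five lines (Apache also accepts both names on a single line, as in require user jdoe spannu):

AuthName "This is a restricted area, please log in first."
AuthType Basic
AuthUserFile /directory/path/passwdfile
require user jdoe
require user spannu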

We are now done with the .htaccess file; all that remains is to create the password file. In the Apache bin directory there is an executable called "htpasswd". The first example below creates a new password file containing the user "jdoe"; note that the -c parameter overwrites any existing file at that path, so be careful. To add a user to an existing file, run the second example — the only difference is the absence of -c.

htpasswd -c -b /directory/path/passwdfile jdoe secUr3Pwd

htpasswd -b /directory/path/passwdfile spannu an0therPwd

The -b parameter lets us supply the password on the command line, which is helpful when scripting the creation of a large number of users at once. If leaving passwords in the shell history is a concern, drop the -b parameter and htpasswd will prompt for each user's password instead.
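
To illustrate that bulk scenario, here is a minimal shell sketch; the users.txt file, holding one "username password" pair per line, is an assumption of this example, not something htpasswd requires.

# Hypothetical bulk add: feed "username password" pairs to htpasswd
while read user pass; do
    htpasswd -b /directory/path/passwdfile "$user" "$pass"
done < users.txt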

We should now be all set. The next visitor who requests the directory containing the .htaccess file should be challenged with a password prompt.
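
One caveat if the prompt never appears: the Apache main configuration must allow .htaccess files to set authentication directives, i.e. the AllowOverride setting for the directory needs to include AuthConfig. A minimal httpd.conf fragment, with the directory path being an example only:

<Directory "/var/www/protected">
    AllowOverride AuthConfig
</Directory>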

To remove a user from a certain password file:

htpasswd -D /directory/path/passwdfile jdoe

For our reference, below is the help text for the htpasswd command.

Usage:
        htpasswd [-cmdpsD] passwordfile username
        htpasswd -b[cmdpsD] passwordfile username password

        htpasswd -n[mdps] username
        htpasswd -nb[mdps] username password
 -c  Create a new file.
 -n  Don't update file; display results on stdout.
 -m  Force MD5 encryption of the password (default).
 -d  Force CRYPT encryption of the password.
 -p  Do not encrypt the password (plaintext).
 -s  Force SHA encryption of the password.
 -b  Use the password from the command line rather than prompting for it.
 -D  Delete the specified user.
On Windows, NetWare and TPF systems the '-m' flag is used by default.
On all other systems, the '-p' flag will probably not work.

Using Oracle Data Pump

To use Data Pump, a directory object must be set up first. In the SQL*Plus example below, we create a directory dedicated to Data Pump use.

create or replace directory datapumpdir as 'd:\datadump';
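
If a schema other than the one creating the directory will run the export, it also needs privileges on the directory object. For example, to let the scott schema (used in the examples below) read from and write to it:

grant read, write on directory datapumpdir to scott;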

With the Data Pump directory in place, we can now export data from the command line. There are two examples below: the first is a full database export (i.e. all schemas), and the second exports a single schema.

expdp username/password@tnsname directory=datapumpdir dumpfile=full%U.dmp filesize=2G logfile=full.log full=y

expdp username/password@tnsname directory=datapumpdir dumpfile=schema%U.dmp filesize=2G logfile=schema.log schemas=scott

Note the %U variable in the dump file name. Because we capped each file at 2 GB, we introduced a potential failure point: what happens when a dump exceeds 2 GB? The %U at the end of the file name means the first dump file is numbered "01", e.g. full01.dmp. If full01.dmp reaches 2 GB, full02.dmp is created automatically, then full03.dmp when full02.dmp fills, and so on.

To import, we first make sure the receiving database has the directory object set up, and then run the command:

impdp username/password@tnsname schemas=scott directory=datapumpdir dumpfile=full%U.dmp logfile=full.log
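
As a side note, once the parameter list grows, the PARFILE keyword (listed in the help text below) lets us keep the parameters in a plain text file instead of on the command line. A small sketch of the full-export example above, assuming the parameters are saved as full.par:

directory=datapumpdir
dumpfile=full%U.dmp
filesize=2G
logfile=full.log
full=y

expdp username/password@tnsname parfile=full.par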

Below is the help text for the expdp command, for our reference.

Export: Release 10.1.0.4.0 - Production on Monday, 24 March, 2008 11:22

Copyright (c) 2003, Oracle.  All rights reserved.


The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

   Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:

   Format:  expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
   Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
               or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH                Attach to existing job, e.g. ATTACH [=job name].
CONTENT               Specifies data to unload where the valid keywords are:
                      (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY             Directory object to be used for dumpfiles and logfiles.
DUMPFILE              List of destination dump files (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE              Calculate job estimates where the valid keywords are:
                      (BLOCKS) and STATISTICS.
ESTIMATE_ONLY         Calculate job estimates without performing the export.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE              Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FLASHBACK_TIME        Time used to get the SCN closest to the specified time.
FULL                  Export entire database (N).
HELP                  Display Help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of export job to create.
LOGFILE               Log file name (export.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile (N).
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to export a subset of a table.
SCHEMAS               List of schemas to export (login schema).
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
TABLES                Identifies a list of tables to export - one schema only.
TABLESPACES           Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK  Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command               Description
------------------------------------------------------------------------------
ADD_FILE              Add dumpfile to dumpfile set.
                      ADD_FILE=dumpfile-name
CONTINUE_CLIENT       Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT           Quit client session and leave job running.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB             Start/resume current job.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
                      STATUS=[interval]
STOP_JOB              Orderly shutdown of job execution and exits the client.
                      STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                      Data Pump job.

Below is the help text for the impdp command.

Import: Release 10.1.0.4.0 - Production on Monday, 24 March, 2008 11:26

Copyright (c) 2003, Oracle.  All rights reserved.


The Data Pump Import utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

     Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Import runs by entering the 'impdp' command followed
by various parameters. To specify parameters, you use keywords:

     Format:  impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
     Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

USERID must be the first parameter on the command line.

Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH                Attach to existing job, e.g. ATTACH [=job name].
CONTENT               Specifies data to load where the valid keywords are:
                      (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY             Directory object to be used for dump, log, and sql files.
DUMPFILE              List of dumpfiles to import from (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE              Calculate job estimates where the valid keywords are:
                      (BLOCKS) and STATISTICS.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FLASHBACK_TIME        Time used to get the SCN closest to the specified time.
FULL                  Import everything from source (Y).
HELP                  Display help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of import job to create.
LOGFILE               Log file name (import.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile.
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to import a subset of a table.
REMAP_DATAFILE        Redefine datafile references in all DDL statements.
REMAP_SCHEMA          Objects from one schema are loaded into another schema.
REMAP_TABLESPACE      Tablespace object are remapped to another tablespace.
REUSE_DATAFILES       Tablespace will be initialized if it already exists (N).
SCHEMAS               List of schemas to import.
SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
SQLFILE               Write all the SQL DDL to a specified file.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
STREAMS_CONFIGURATION Enable the loading of Streams metadata
TABLE_EXISTS_ACTION   Action to take if imported object already exists.
                      Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES                Identifies a list of tables to import.
TABLESPACES           Identifies a list of tablespaces to import.
TRANSFORM             Metadata transform to apply (Y/N) to specific objects.
                      Valid transform keywords: SEGMENT_ATTRIBUTES and STORAGE.
                      ex. TRANSFORM=SEGMENT_ATTRIBUTES:N:TABLE.
TRANSPORT_DATAFILES   List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK  Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.
                      Only valid in NETWORK_LINK mode import operations.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.
                      Only valid for NETWORK_LINK and SQLFILE.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command               Description (Default)
------------------------------------------------------------------------------
CONTINUE_CLIENT       Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT           Quit client session and leave job running.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB             Start/resume current job.
                      START_JOB=SKIP_CURRENT will start the job after skipping
                      any action which was in progress when job was stopped.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
                      STATUS=[interval]
STOP_JOB              Orderly shutdown of job execution and exits the client.
                      STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                      Data Pump job.

TNS Listener service missing

When installing Oracle, the installer creates a starter database by default, but that step is sometimes skipped because you may not need one. When no initial database is created, the installer can fail to create the TNS Listener service. To remedy the problem, all you have to do is run this command:

lsnrctl start

Before you run this command, though, take a look at the tnsnames.ora, listener.ora, and sqlnet.ora files to make sure they are properly configured.
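
For reference, a minimal listener.ora looks something like the sketch below; the host name and port are placeholders to adapt, not values any particular installer produced.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    )
  )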

How to output a string in PHP

In PHP, to output a string we can use either echo() or print(). All six examples below produce the same output, "hello world":

echo('hello world');
print('hello world');
print 'hello world';

echo("hello world");
print("hello world");
print "hello world";

Note that print() works without parentheses. That is because print() is not really a function; it is a language construct — and so is echo(), which is why echo 'hello world'; works just as well. The two behave almost identically, so generally you can use whichever you prefer; echo() is reputed to be marginally faster, which may be worth keeping in mind in a larger application, though the difference is rarely measurable.
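
One practical difference: when used without parentheses, echo accepts a comma-separated list of arguments, which print does not. Both lines below also output "hello world":

echo 'hello world';
echo 'hello ', 'world';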

To output a string variable, we can do something like this:

$name = "world";

echo('hello '.$name);
print('hello '.$name);
print 'hello '.$name;

echo("hello ".$name);
print("hello ".$name);
print "hello ".$name;

echo("hello $name");
print("hello $name");
print "hello $name";

Once again, all of the examples above output "hello world". The period is PHP's concatenation operator, joining "hello " and the variable containing "world"; it is the equivalent of the ampersand in VB/ASP and the plus sign in Java/JSP. Now look at the third set above, where double quotes enclose the string and the variable together. This variable interpolation works only inside double quotes. It makes the code slightly simpler to write and slightly easier to read, but also a tad slower: whenever the PHP engine sees a double-quoted string, it must scan it for variables to expand — even if none are found.
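
A related double-quote trick: wrapping the variable in curly braces marks its boundary explicitly, which helps when the variable butts up against other word characters. A small example:

$name = "world";
echo "hello {$name}!";   // outputs: hello world!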

In conclusion, echo() and print() are nearly identical, as is the behavior of single and double quotes, so choose whichever suits your style. For the last, however trivial, sliver of processing speed, echo() with single quotes is the preferred combination.