
- import berkeley database module into trunk
- Many thanks to William Quan from Cisco Systems for the contribution
- based on patch #1803180, renamed from berkely_db to db_berkeley
- Some work still remains:
- build db_berkeley as its own debian package
- add db_berkeley to the xml database creation process to generate the
content in scripts/db_berkeley/openser like the dbtext stuff
- evaluate if it's possible to use the db_free_row and db_free_rows
functions for this module
- port to new logging system


git-svn-id: https://openser.svn.sourceforge.net/svnroot/openser/trunk@2844 689a6050-402a-0410-94f2-e92a70836424

Henning Westerholt 18 years ago
commit
326fba488a

+ 19 - 0
modules/db_berkeley/Makefile

@@ -0,0 +1,19 @@
+# $Id:  $
+#
+# example module makefile
+#
+# 
+# WARNING: do not run this directly, it should be run by the master Makefile
+
+# extra debug messages
+# -DSC_EXTRA_DEBUG is optional
+DEFS +=-I$(LOCALBASE)/include -I$(LOCALBASE)/BerkeleyDB.4.6/include \
+	-I$(SYSBASE)/include
+
+
+include ../../Makefile.defs 
+auto_gen=
+NAME=db_berkeley.so
+LIBS+=-L$(LOCALBASE)/lib -L$(SYSBASE)/lib -L$(LOCALBASE)/BerkeleyDB.4.6/lib -ldb
+
+include ../../Makefile.modules

+ 482 - 0
modules/db_berkeley/README

@@ -0,0 +1,482 @@
+
+Berkeley DB Module
+
+Will Quan
+
+   Cisco Systems
+
+Edited by
+
+Will Quan
+
+   Copyright © 2007 Cisco Systems
+     _________________________________________________________
+
+   Table of Contents
+   1. User's Guide
+
+        1.1. Overview
+        1.2. Dependencies
+
+              1.2.1. OpenSER Modules
+              1.2.2. External Libraries or Applications
+
+        1.3. Exported Parameters
+
+              1.3.1. auto_reload (integer)
+              1.3.2. log_enable (integer)
+              1.3.3. journal_roll_interval (integer seconds)
+
+        1.4. Exported Functions
+        1.5. Installation and Running
+        1.6. Database Schema and Metadata
+        1.7. METADATA_COLUMNS (required)
+        1.8. METADATA_KEYS (required)
+        1.9. METADATA_READONLY (optional)
+        1.10. METADATA_LOGFLAGS (optional)
+        1.11. Maintenance Shell Script: db_berkeley.sh
+        1.12. DB Recovery : bdb_recover
+
+   2. Developer's Guide
+   3. Frequently Asked Questions
+
+   List of Examples
+   1-1. Set auto_reload parameter
+   1-2. Set log_enable parameter
+   1-3. Set journal_roll_interval parameter
+   1-4. 1
+   1-5. 2
+   1-6. 3
+   1-7. contents of version table
+   1-8. METADATA_COLUMNS
+   1-9. METADATA_KEYS
+   1-10. METADATA_LOGFLAGS
+   1-11. db_berkeley.sh usage
+   1-12. bdb_recover usage
+     _________________________________________________________
+
+Chapter 1. User's Guide
+
+1.1. Overview
+
+   This module integrates Berkeley DB into OpenSER. It
+   implements the DB API defined by OpenSER.
+     _________________________________________________________
+
+1.2. Dependencies
+
+1.2.1. OpenSER Modules
+
+   The following modules must be loaded before this module:
+
+     * No dependencies on other OpenSER modules.
+     _________________________________________________________
+
+1.2.2. External Libraries or Applications
+
+   The following libraries or applications must be installed
+   before running OpenSER with this module loaded:
+
+     * Berkeley DB 4.5 - an embedded database.
+     _________________________________________________________
+
+1.3. Exported Parameters
+
+1.3.1. auto_reload (integer)
+
+   When enabled, auto_reload will close and reopen a Berkeley DB
+   file when its inode has changed. The check occurs only during
+   a query; other operations, such as insert or delete, do not
+   invoke auto_reload.
+
+   Default value is 0 (1 - on / 0 - off). 
+
+   Example 1-1. Set auto_reload parameter
+...
+modparam("db_berkeley", "auto_reload", 1)
+...
+     _________________________________________________________
+
+1.3.2. log_enable (integer)
+
+   The log_enable boolean controls whether journal files are
+   created. The following operations can be journaled: INSERT,
+   UPDATE, DELETE. Other operations, such as SELECT, are not.
+   This journaling is required if you need to recover from a
+   corrupt DB file; that is, bdb_recover requires these journals
+   to rebuild the DB file. If you find this log feature useful,
+   you may also be interested in the METADATA_LOGFLAGS bitfield
+   that each table has. It allows you to control which operations
+   to journal, and the destination (such as syslog, stdout, or a
+   local file). Refer to sclib_log() and the documentation on
+   METADATA.
+
+   Default value is 0 (1 - on / 0 - off). 
+
+   Example 1-2. Set log_enable parameter
+...
+modparam("db_berkeley", "log_enable", 1)
+...
+     _________________________________________________________
+
+1.3.3. journal_roll_interval (integer seconds)
+
+   When set, journal_roll_interval will close the current journal
+   file and open a new one every given number of seconds. The
+   roll occurs only at the end of writing a log entry, so it is
+   not guaranteed to roll 'on time'.
+
+   Default value is 0 (off). 
+
+   Example 1-3. Set journal_roll_interval parameter
+...
+modparam("db_berkeley", "journal_roll_interval", 3600)
+...
+     _________________________________________________________
+
+1.4. Exported Functions
+
+   No functions are exported for use from the configuration file.
+     _________________________________________________________
+
+1.5. Installation and Running
+
+   First download, compile and install the Berkeley DB. This is
+   outside the scope of this document. Documentation for this
+   procedure is available on the Internet.
+
+   Next, set up OpenSER to compile with the db_berkeley module.
+   In the directory modules/db_berkeley, modify the Makefile to
+   point to your distribution of Berkeley DB.
+
+   You may also define 'SC_EXTRA_DEBUG' to compile in extra debug
+   logs. However, this is not recommended on production servers.
+   Because the module depends on an external library, the
+   db_berkeley module is not compiled and installed by default.
+   You can use one of the following options.
+
+     * edit the "Makefile" and remove "db_berkeley" from
+       "excluded_modules" list. Then follow the standard
+       procedure to install OpenSER: "make all; make install".
+     * from command line use: 'make all
+       include_modules="db_berkeley"; make install
+       include_modules="db_berkeley"'.
+
+   Installation of OpenSER is performed by simply running make
+   install as the root user in the main directory. This will install
+   the binaries in /usr/local/sbin/. If this was successful, the
+   scripts/db_berkeley.sh file should now be installed as
+   /usr/local/sbin/openser_db_berkeley.sh
+
+   Once you decide where you want to install the Berkeley DB
+   files, for instance '/var/db_berkeley/bdb', we must initially
+   create the files there. OpenSER will not start up unless these
+   DB files already exist. Here are a couple of ways to do
+   this:
+
+   Example 1-4. 1
+export DB_HOME=/var/db_berkeley/bdb ; /usr/local/sbin/openser_db_berkel
+ey.sh create
+
+   This way, any later operations with openser_db_berkeley.sh
+   will not require you to provide the path to your DB files.
+   Alternately, you can specify them on the command line like
+   this:
+
+   Example 1-5. 2
+/usr/local/sbin/openser_db_berkeley.sh create /var/db_berkeley/bdb
+
+   After this creation step, the DB files are now seeded with the
+   necessary meta-data for OpenSER to startup. For a description
+   of the meta-data refer to the section about db_berkeley.sh
+   operations. Modify the OpenSER configuration file to use
+   db_berkeley. The database URL for modules must be the path to
+   the directory where the Berkeley DB table-files are located,
+   prefixed by "db_berkeley://", e.g.,
+   "db_berkeley:///var/db_berkeley/bdb". If you require the DB
+   files to reload automatically, be sure to include the
+   auto_reload modparam line.
+
+   A couple of other things to consider are the 'db_mode' and
+   'use_domain' modparams, as they will impact behavior as well.
+   The best description of these parameters is found in the
+   usrloc documentation.
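For illustration, a minimal configuration fragment using db_berkeley as the usrloc backend might look like the following. The module path and the DB directory are examples only and depend on your installation; the parameter values shown are not recommendations.

```
loadmodule "/usr/local/lib/openser/modules/db_berkeley.so"
loadmodule "/usr/local/lib/openser/modules/usrloc.so"

modparam("db_berkeley", "auto_reload", 1)
modparam("usrloc", "db_mode", 1)
modparam("usrloc", "use_domain", 0)
modparam("usrloc", "db_url", "db_berkeley:///var/db_berkeley/bdb")
```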
+
+   The '|' pipe character is used as a record delimiter within
+   this Berkeley DB implementation and must not be present in any
+   DB field.
+     _________________________________________________________
+
+1.6. Database Schema and Metadata
+
+   Each Berkeley DB file must initially be created manually via
+   the openser_db_berkeley.sh maintenance utility. This section
+   provides some details as to the content and format of the DB
+   file upon creation.
+
+   Since the Berkeley DB stores key value pairs, the database is
+   seeded with a few meta-data rows. The keys to these rows must
+   begin with 'METADATA'. Here is an example of table meta-data,
+   taken from the table 'version'.
+
+   Example 1-6. 3
+METADATA_COLUMNS
+table_name(str) table_version(int)
+METADATA_KEY
+0
+
+   In the above example, the row METADATA_COLUMNS defines the
+   column names and type, and the row METADATA_KEY defines which
+   column(s) form the key. Here the value of 0 indicates that
+   column 0 is the key (i.e. table_name). With respect to column
+   types, the db_berkeley module only has the following types:
+   string, str, int, double, and datetime. The default type is
+   string, and is used when one of the others is not specified.
+   The columns of the meta-data are delimited by whitespace.
+
+   The actual column data is stored as a string value, and
+   delimited by the '|' pipe character. Since the code tokenizes
+   on this delimiter, it is important that this character not
+   appear in any valid data field. The following is the output of
+   the 'db_berkeley.sh dump version' command. It shows contents
+   of table 'version' in plain text.
+
+   Example 1-7. contents of version table
+VERSION=3
+format=print
+type=hash
+h_nelem=21
+db_pagesize=4096
+HEADER=END
+ METADATA_READONLY
+ 1
+ address|
+ address|3
+ aliases|
+ aliases|1004
+ dbaliases|
+ dbaliases|1
+ domain|
+ domain|1
+ gw_grp|
+ gw_grp|1
+ gw|
+ gw|4
+ speed_dial|
+ speed_dial|2
+ subscriber|
+ subscriber|6
+ uri|
+ uri|1
+ METADATA_COLUMNS
+ table_name(str) table_version(int)
+ METADATA_KEY
+ 0
+ acc|
+ acc|4
+ grp|
+ grp|2
+ lcr|
+ lcr|2
+ location|
+ location|1004
+ missed_calls|
+ missed_calls|3
+ re_grp|
+ re_grp|1
+ silo|
+ silo|5
+ trusted|
+ trusted|4
+ usr_preferences|
+ usr_preferences|2
+DATA=END
+     _________________________________________________________
+
+1.7. METADATA_COLUMNS (required)
+
+   The METADATA_COLUMNS row contains the column names and types.
+   Each is space delimited. Here is an example of the data taken
+   from table subscriber :
+
+   Example 1-8. METADATA_COLUMNS
+METADATA_COLUMNS
+username(str) domain(str) password(str) ha1(str) ha1b(str) first_name(s
+tr) last_name(str) email_address(str) datetime_created(datetime) timezo
+ne(str) rpid(str)
+
+
+   Related (hardcoded) limitations:
+
+     * maximum of 32 columns per table.
+     * maximum table name size is 64.
+     * maximum data length is 2048.
+
+   Currently supporting these five types: str, datetime, int,
+   double, string.
+     _________________________________________________________
+
+1.8. METADATA_KEYS (required)
+
+   The METADATA_KEYS row indicates the indexes of the key
+   columns, with respect to the order specified in
+   METADATA_COLUMNS. Here is an example taken from table
+   subscriber that brings up a good point:
+
+   Example 1-9. METADATA_KEYS
+ METADATA_KEY
+ 0 1
+
+
+   The point is that both the username and the domain name are
+   required as the key to this record. Thus, the usrloc modparam
+   use_domain = 1 must be set for this to work.
+     _________________________________________________________
+
+1.9. METADATA_READONLY (optional)
+
+   The METADATA_READONLY row contains a boolean 0 or 1. By
+   default, its value is 0. On startup the DB initially opens as
+   read-write (to load the metadata); then, if this is set to 1,
+   it closes and reopens as read-only (ro). This is useful
+   because read-only mode affects the internal DB locking, among
+   other things.
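Following the format shown in the version-table dump above, marking a table read-only amounts to seeding these two metadata lines in its DB file:

```
METADATA_READONLY
1
```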
+     _________________________________________________________
+
+1.10. METADATA_LOGFLAGS (optional)
+
+   The METADATA_LOGFLAGS row contains a bitfield that customizes
+   the journaling on a per table basis. If not present the
+   default value is taken as 0. Here are the masks so far (taken
+   from sc_lib.h):
+
+   Example 1-10. METADATA_LOGFLAGS
+#define JLOG_NONE 0
+#define JLOG_INSERT 1
+#define JLOG_DELETE 2
+#define JLOG_UPDATE 4
+#define JLOG_STDOUT 8
+#define JLOG_SYSLOG 16
+
+   This means that if you want to journal INSERTs to the local
+   file and to syslog, the value should be set to 1+16=17. If you
+   do not want to journal at all, set this to 0.
+     _________________________________________________________
+
+1.11. Maintenance Shell Script: db_berkeley.sh
+
+   The db_berkeley.sh script is located in the
+   [openser_root_dir]/scripts directory. The script will print
+   help when invoked without parameters on the command line. The
+   following is the help text.
+
+   Script for maintaining OpenSER Berkeley DB tables
+
+   Example 1-11. db_berkeley.sh usage
+usage: db_berkeley.sh create   [DB_HOME] (creates the db with files wit
+h metadata)
+       db_berkeley.sh presence [DB_HOME] (adds the presence related tab
+les)
+       db_berkeley.sh extra    [DB_HOME] (adds the extra tables - imc,c
+pl,siptrace,domainpolicy)
+       db_berkeley.sh drop     [DB_HOME] (deletes db files in DB_HOME)
+       db_berkeley.sh reinit   [DB_HOME] (drop and create tables in one
+ step)
+       db_berkeley.sh list     [DB_HOME] (lists the underlying db files
+ on the FS)
+       db_berkeley.sh backup   [DB_HOME] (tars current database)
+       db_berkeley.sh restore   bu [DB_HOME] (untar bu into DB_HOME)
+       db_berkeley.sh dump      db [DB_HOME] (db_dump the underlying db
+ file to STDOUT)
+       db_berkeley.sh swap      db [DB_HOME] (installs db.new by db ->
+db.old; db.new -> db)
+       db_berkeley.sh newappend db datafile [DB_HOME] (appends data to
+a new instance of db; output DB_HOME/db.new)
+     _________________________________________________________
+
+1.12. DB Recovery : bdb_recover
+
+   The db_berkeley module uses the Concurrent Data Store (CDS)
+   architecture. As such, no transaction or journaling is
+   provided by the DB natively. The application bdb_recover is
+   specifically written to recover data from journal files that
+   OpenSER creates. The bdb_recover application requires an
+   additional text file that contains the table schema.
+
+   The schema is loaded with the '-s' option and is required for
+   all operations.
+
+   The '-h' home option is the DB_HOME path. Unlike the Berkeley
+   utilities, this application does not look for the DB_HOME
+   environment variable, so you have to specify it. If not
+   specified, it will assume the current working directory. The
+   last argument is the operation. There are fundamentally only
+   two operations: create and recover.
+
+   The following illustrates the four operations available to the
+   administrator.
+
+   Example 1-12. bdb_recover usage
+usage: ./bdb_recover -s schemafile [-h home] [-c tablename]
+        This will create a brand new DB file with metadata.
+
+usage: ./bdb_recover -s schemafile [-h home] [-C all]
+        This will create all the core tables, each with metadata.
+
+usage: ./bdb_recover -s schemafile [-h home] [-r journal-file]
+        This will rebuild a DB and populate it with operation from jour
+nal-file.
+        The table name is embedded in the journal-file name by conventi
+on.
+
+usage: ./bdb_recover -s schemafile [-h home] [-R lastN]
+        This will iterate over all core tables enumerated. If journal f
+iles exist in 'home',
+        a new DB file will be created and populated with the data found
+ in the last N files.
+        The files are 'replayed' in chronological order (oldest to newe
+st). This
+        allows the administrator to rebuild the db with a subset of all
+ possible
+        operations if needed. For example, you may only be interested i
+n
+        the last hours data in table location.
+
+   It is important to note that the corrupted DB file must be
+   moved out of the way before bdb_recover is executed.
+     _________________________________________________________
+
+Chapter 2. Developer's Guide
+
+   The module does not provide any API to use in other OpenSER
+   modules.
+     _________________________________________________________
+
+Chapter 3. Frequently Asked Questions
+
+   3.1. Where can I find more about OpenSER?
+   3.2. Where can I post a question about this module?
+   3.3. How can I report a bug?
+
+   3.1. Where can I find more about OpenSER?
+
+   Take a look at http://openser.org/.
+
+   3.2. Where can I post a question about this module?
+
+   First of all, check whether your question was already answered
+   on one of our mailing lists:
+
+     * User Mailing List -
+       http://openser.org/cgi-bin/mailman/listinfo/users
+     * Developer Mailing List -
+       http://openser.org/cgi-bin/mailman/listinfo/devel
+
+   E-mails regarding any stable OpenSER release should be sent to
+   <[email protected]> and e-mails regarding development versions
+   should be sent to <[email protected]>.
+
+   If you want to keep the mail private, send it to
+   <[email protected]>.
+
+   3.3. How can I report a bug?
+
+   Please follow the guidelines provided at:
+   http://sourceforge.net/tracker/?group_id=139143.

+ 1224 - 0
modules/db_berkeley/bdb_lib.c

@@ -0,0 +1,1224 @@
+/*
+ * $Id$
+ *
+ * db_berkeley module, portions of this code were templated using
+ * the dbtext and postgres modules.
+ *
+ * Copyright (C) 2007 Cisco Systems
+ *
+ * This file is part of openser, a free SIP server.
+ *
+ * openser is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version
+ *
+ * openser is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License 
+ * along with this program; if not, write to the Free Software 
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * 
+ * History:
+ * --------
+ * 2007-09-19  genesis (wiquan)
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <time.h>
+#include <sys/types.h>
+#include <dirent.h>
+#include <syslog.h>
+#include "../../ut.h"
+#include "../../mem/mem.h"
+#include "../../dprint.h"
+
+#include "bdb_util.h"
+#include "bdb_lib.h"
+#include "bdb_val.h"
+
+static database_p *_cachedb = NULL;
+static db_parms_p _db_parms = NULL;
+
+/**
+ *
+ */
+int sclib_init(db_parms_p _p) 
+{
+	if (!_cachedb)
+	{
+		_cachedb = pkg_malloc( sizeof(database_p) );
+		if (!_cachedb) 
+		{	LOG(L_CRIT,"sclib_init: not enough pkg mem\n");
+			return -1;
+		}
+		
+		*_cachedb = NULL;
+		
+		/*create default parms*/
+		db_parms_p dp = (db_parms_p) pkg_malloc( sizeof(db_parms_t) );
+		if (!dp) 
+		{	LOG(L_CRIT,"sclib_init: not enough pkg mem\n");
+			return -1;
+		}
+		
+		if(_p)
+		{
+			dp->cache_size  = _p->cache_size;
+			dp->auto_reload = _p->auto_reload;
+			dp->log_enable  = _p->log_enable;
+			dp->journal_roll_interval = _p->journal_roll_interval;
+		}
+		else
+		{
+			dp->cache_size = (4 * 1024 * 1024); //4Mb
+			dp->auto_reload = 0;
+			dp->log_enable = 0;
+			dp->journal_roll_interval = 3600;
+		}
+		
+		_db_parms = dp;
+	}
+	return 0;
+}
+
+
+/**
+ * close all DBs and then the DBENV; free all memory
+ */
+int sclib_destroy(void)
+{
+	if (_cachedb)	db_free(*_cachedb);
+	if(_db_parms)	pkg_free(_db_parms);
+	return 0;
+}
+
+
+/** closes the underlying Berkeley DB.
+  assumes the lib data-structures are already initialized;
+  used to sync and reload the db file.
+*/
+int sclib_close(char* _n)
+{
+	str s;
+	int rc;
+	tbl_cache_p _tbc;
+	DB* _db = NULL;
+	DB_ENV* _env = NULL;
+	database_p _db_p = *_cachedb;
+	
+	if (!_cachedb || !_n)
+		return -1;
+	
+	rc = 0;
+	s.s = (char*)_n;
+	s.len = strlen(_n);
+	
+	if (_db_p)
+	{	
+		_env = _db_p->dbenv;
+		_tbc = _db_p->tables;
+		
+		if(s.len == _db_p->name.len && 
+		!strncasecmp(s.s, _db_p->name.s, _db_p->name.len))
+		{
+			//close the whole dbenv
+			DBG("-- sclib_close ENV %.*s \n", s.len, s.s);
+			while(_tbc)
+			{
+				if(_tbc->dtp)
+				{
+					lock_get(&_tbc->dtp->sem);
+					_db = _tbc->dtp->db;
+					if(_db)
+						rc = _db->close(_db, 0);
+					if(rc != 0)
+						LOG(L_CRIT,"lib_close: error closing %s\n"
+							, _tbc->dtp->name.s);
+					_tbc->dtp->db = NULL;
+					
+					lock_release(&_tbc->dtp->sem);
+				}
+				_tbc = _tbc->next;
+			}
+			_env->close(_env, 0);
+			_db_p->dbenv = NULL;
+			return 0;
+		}
+		
+		//close a particular db
+		while(_tbc)
+		{
+			if(_tbc->dtp)
+			{
+				DBG("-- sclib_close DB %.*s \n", s.len, s.s);
+				if(_tbc->dtp->name.len == s.len && 
+				!strncasecmp(_tbc->dtp->name.s, s.s, s.len ))
+				{
+					lock_get(&_tbc->dtp->sem);
+					_db = _tbc->dtp->db;
+					if(_db)
+						rc = _db->close(_db, 0);
+					if(rc != 0)
+						LOG(L_CRIT,"lib_close: error closing %s\n"
+							, _tbc->dtp->name.s);
+					_tbc->dtp->db = NULL;
+					lock_release(&_tbc->dtp->sem);
+					return 0;
+				}
+			}
+			_tbc = _tbc->next;
+		}
+	}
+	
+	return 0;
+}
+
+/** opens the underlying Berkeley DB.
+  assumes the lib data-structures are already initialized;
+  used to sync and reload the db file.
+*/
+int sclib_reopen(char* _n)
+{
+	str s;
+	int rc, flags;
+	tbl_cache_p _tbc;
+	DB* _db = NULL;
+	DB_ENV* _env = NULL;
+	database_p _db_p = *_cachedb;
+	rc = flags = 0;
+	_tbc = NULL;
+	
+	if (!_cachedb || !_n)
+		return -1;
+
+	s.s = (char*)_n;
+	s.len = strlen(_n);
+	
+	if (_db_p)
+	{
+		_env = _db_p->dbenv;
+		_tbc = _db_p->tables;
+		
+		if(s.len ==_db_p->name.len && 
+		!strncasecmp(s.s, _db_p->name.s,_db_p->name.len))
+		{
+			//open the whole dbenv
+			DBG("-- sclib_reopen ENV %.*s \n", s.len, s.s);
+			if(!_db_p->dbenv)
+			{	rc = sclib_create_dbenv(&_env, _n);
+				_db_p->dbenv = _env;
+			}
+			
+			if(rc!=0) return rc;
+			_env = _db_p->dbenv;
+			_tbc = _db_p->tables;
+
+			while(_tbc)
+			{
+				if(_tbc->dtp)
+				{
+					lock_get(&_tbc->dtp->sem);
+					if(!_tbc->dtp->db)
+					{
+						if ((rc = db_create(&_db, _env, 0)) != 0)
+						{	_env->err(_env, rc, "db_create");
+							LOG(L_CRIT, "sclib_reopen: error in db_create.\n");
+							LOG(L_CRIT, "sclib_reopen: db error: %s.\n",db_strerror(rc));
+							sclib_recover(_tbc->dtp, rc);
+						}
+					}
+					
+					if ((rc = _db->open(_db, NULL, _n, NULL, DB_HASH, DB_CREATE, 0664)) != 0)
+					{	_db->dbenv->err(_env, rc, "DB->open: %s", _n);
+						LOG(L_CRIT, "sclib_reopen:bdb open: %s.\n",db_strerror(rc));
+						sclib_recover(_tbc->dtp, rc);
+					}
+					
+					_tbc->dtp->db = _db;
+					lock_release(&_tbc->dtp->sem);
+				}
+				_tbc = _tbc->next;
+			}
+			_env->close(_env, 0);
+			return rc;
+		}
+		
+		//open a particular db
+		while(_tbc)
+		{
+			if(_tbc->dtp)
+			{
+				DBG("-- sclib_reopen DB %.*s \n", s.len, s.s);
+				if(_tbc->dtp->name.len == s.len && 
+				!strncasecmp(_tbc->dtp->name.s, s.s, s.len ))
+				{
+					lock_get(&_tbc->dtp->sem);
+					if(!_tbc->dtp->db) 
+					{
+						if ((rc = db_create(&_db, _env, 0)) != 0)
+						{	_env->err(_env, rc, "db_create");
+							LOG(L_CRIT, "sclib_reopen: error in db_create.\n");
+							LOG(L_CRIT, "sclib_reopen: db error: %s.\n",db_strerror(rc));
+							sclib_recover(_tbc->dtp, rc);
+						}
+					}
+					
+					if ((rc = _db->open(_db, NULL, _n, NULL, DB_HASH, DB_CREATE, 0664)) != 0)
+					{	_db->dbenv->err(_env, rc, "DB->open: %s", _n);
+						LOG(L_CRIT, "sclib_reopen:bdb open: %s.\n",db_strerror(rc));
+						sclib_recover(_tbc->dtp, rc);
+					}
+					_tbc->dtp->db = _db;
+					lock_release(&_tbc->dtp->sem);
+					return rc;
+				}
+			}
+			_tbc = _tbc->next;
+		}
+	}
+
+	return 0;
+}
+
+
+/**
+ *
+ */
+int sclib_create_dbenv(DB_ENV **_dbenv, char* _home)
+{
+	DB_ENV *env;
+	char *progname;
+	int rc, flags;
+	
+	progname = "openser";
+	
+	/* Create an environment and initialize it for additional error reporting. */
+	if ((rc = db_env_create(&env, 0)) != 0) 
+	{
+		LOG(L_ERR, "sclib_create_dbenv: db_env_create failed !\n");
+		LOG(L_ERR, "sc_lib:bdb error: %s.\n",db_strerror(rc)); 
+		return (rc);
+	}
+ 
+	env->set_errpfx(env, progname);
+
+	/*  Specify the shared memory buffer pool cachesize */ 
+	if ((rc = env->set_cachesize(env, 0, _db_parms->cache_size, 0)) != 0) 
+	{
+		LOG(L_ERR, "sclib_create_dbenv: dbenv set_cachesize failed !\n");
+		LOG(L_ERR, "sc_lib:bdb error: %s.\n",db_strerror(rc));
+		env->err(env, rc, "set_cachesize"); 
+		goto err; 
+	}
+
+	/* Concurrent Data Store flags */
+	flags = DB_CREATE |
+		DB_INIT_CDB |
+		DB_INIT_MPOOL |
+		DB_THREAD;
+	
+	/* Transaction Data Store flags ; not supported yet */
+	/*
+	flags = DB_CREATE |
+		DB_RECOVER |
+		DB_INIT_LOG | 
+		DB_INIT_LOCK |
+		DB_INIT_MPOOL |
+		DB_THREAD |
+		DB_INIT_TXN;
+	*/
+	
+	/* Open the environment */ 
+	if ((rc = env->open(env, _home, flags, 0)) != 0) 
+	{ 
+		LOG(L_ERR, "sclib_create_dbenv: dbenv is not initialized!\n");
+		LOG(L_ERR, "sc_lib:bdb error: %s.\n",db_strerror(rc));
+		env->err(env, rc, "environment open: %s", _home); 
+		goto err; 
+	}
+	
+	*_dbenv = env;
+	return (0);
+
+err: (void)env->close(env, 0);
+	return (rc);
+}
+
+
+/**
+ */
+database_p sclib_get_db(str *_s)
+{
+	int rc;
+	database_p _db_p=NULL;
+	char name[512];
+
+	if(!_s || !_s->s || _s->len<=0 || _s->len > 512)
+		return NULL;
+
+	if( !_cachedb)
+	{
+		LOG(L_ERR, "sclib_get_db: _cachedb is not initialized!\n");
+		return NULL;
+	}
+
+	_db_p = *_cachedb;
+	if(_db_p)
+	{
+		DBG("sclib_get_db: db already cached!\n");
+		return _db_p;
+	}
+
+	if(!sc_is_database(_s))
+	{	
+		LOG(L_ERR, "sclib_get_db: database [%.*s] does not exist!\n"
+			,_s->len , _s->s);
+		return NULL;
+	}
+
+	_db_p = (database_p)pkg_malloc(sizeof(database_t));
+	if(!_db_p)
+	{
+		LOG(L_ERR, "sclib_get_db: no memory for database_t.\n");
+		return NULL;
+	}
+
+	_db_p->name.s = (char*)pkg_malloc(_s->len*sizeof(char));
+	memcpy(_db_p->name.s, _s->s, _s->len);
+	_db_p->name.len = _s->len;
+
+	strncpy(name, _s->s, _s->len);
+	name[_s->len] = 0;
+
+	if ((rc = sclib_create_dbenv(&(_db_p->dbenv), name)) != 0)
+	{
+		LOG(L_ERR, "sclib_get_db: sclib_create_dbenv failed");
+		pkg_free(_db_p->name.s);
+		pkg_free(_db_p);
+		return NULL;
+	}
+
+	_db_p->tables=NULL;
+	*_cachedb = _db_p;
+
+	return _db_p;
+}
+
+
+/**
+ * look through a linked list for the table; if it does not exist, create a new one
+ * and add to the list
+*/
+tbl_cache_p sclib_get_table(database_p _db, str *_s)
+{
+	tbl_cache_p _tbc = NULL;
+	table_p _tp = NULL;
+
+	if(!_db || !_s || !_s->s || _s->len<=0)
+		return NULL;
+
+	if(!_db->dbenv)
+	{
+		return NULL;
+	}
+
+	_tbc = _db->tables;
+	while(_tbc)
+	{
+		if(_tbc->dtp)
+		{
+
+			if(_tbc->dtp->name.len == _s->len 
+				&& !strncasecmp(_tbc->dtp->name.s, _s->s, _s->len ))
+			{
+				return _tbc;
+			}
+		}
+		_tbc = _tbc->next;
+	}
+
+	_tbc = (tbl_cache_p)pkg_malloc(sizeof(tbl_cache_t));
+	if(!_tbc)
+		return NULL;
+
+	if(!lock_init(&_tbc->sem))
+	{
+		pkg_free(_tbc);
+		return NULL;
+	}
+
+	_tp = sclib_create_table(_db, _s);
+
+#ifdef SC_EXTRA_DEBUG
+	DBG("DBG:sclib_get_table: %.*s\n", _s->len, _s->s);
+#endif
+
+	if(!_tp)
+	{
+		LOG(L_ERR, "sclib_get_table: failed to create table.\n");
+		pkg_free(_tbc);
+		return NULL;
+	}
+
+	lock_get(&_tbc->sem);
+	_tbc->dtp = _tp;
+
+	if(_db->tables)
+		(_db->tables)->prev = _tbc;
+	
+	_tbc->next = _db->tables;
+	_db->tables = _tbc;
+	lock_release(&_tbc->sem);
+
+	return _tbc;
+}
+
+
+void sclib_log(int op, table_p _tp, char* _msg, int len)
+{
+	if(!_tp || !len)
+		return;
+	if(!_db_parms->log_enable)
+		return;
+	if(_tp->logflags == JLOG_NONE)
+		return;
+	
+	if ((_tp->logflags & op) == op)
+	{	int op_len=7;
+		char buf[MAX_ROW_SIZE + op_len];
+		char *c;
+		time_t now = time(NULL);
+		
+		if( _db_parms->journal_roll_interval)
+		{
+			if((_tp->t) && (now - _tp->t) > _db_parms->journal_roll_interval)
+			{	/*try to roll logfile*/
+				if(sclib_create_journal(_tp))
+				{
+					LOG(L_ERR, "sclib_log: Journaling has FAILED !\n");
+					return;
+				}
+			}
+		}
+		
+		c = buf;
+		switch (op)
+		{
+		case JLOG_INSERT:
+			strncpy(c, "INSERT|", op_len);
+			break;
+		case JLOG_UPDATE:
+			strncpy(c, "UPDATE|", op_len);
+			break;
+		case JLOG_DELETE:
+			strncpy(c, "DELETE|", op_len);
+			break;
+		}
+		
+		c += op_len;
+		strncpy(c, _msg, len);
+		c +=len;
+		*c = '\n';
+		c++;
+		*c = '\0';
+		
+		if ((_tp->logflags & JLOG_STDOUT) == JLOG_STDOUT)
+			puts(buf);
+		
+		if ((_tp->logflags & JLOG_SYSLOG) == JLOG_SYSLOG)
+			syslog(LOG_LOCAL6, "%s", buf);
+		
+		if(_tp->fp) 
+		{
+			if(fputs(buf, _tp->fp) >= 0)
+				fflush(_tp->fp);
+		}
+	}
+}
+
+/**
+ *
+ */
+table_p sclib_create_table(database_p _db, str *_s)
+{
+
+	int rc,i,flags;
+	DB *bdb = NULL;
+	table_p tp = NULL;
+	char tblname[MAX_TABLENAME_SIZE]; 
+
+	if(!_db || !_db->dbenv)
+	{
+		LOG(L_ERR, "sclib_create_table: no database_p or dbenv.\n");
+		return NULL;
+	}
+
+	tp = (table_p)pkg_malloc(sizeof(table_t));
+	if(!tp)
+	{
+		LOG(L_ERR, "sclib_create_table: no memory for table_t.\n");
+		return NULL;
+	}
+
+	if ((rc = db_create(&bdb, _db->dbenv, 0)) != 0)
+	{ 
+		_db->dbenv->err(_db->dbenv, rc, "database create");
+		LOG(L_ERR, "sclib_create_table: error in db_create.\n");
+		LOG(L_ERR, "sc_lib:bdb error: %s.\n",db_strerror(rc));
+		pkg_free(tp);
+		return NULL;
+	}
+
+	memset(tblname, 0, MAX_TABLENAME_SIZE);
+	strncpy(tblname, _s->s, _s->len);
+
+#ifdef SC_EXTRA_DEBUG
+	DBG("-----------------------------------\n");
+	DBG("------- CREATE TABLE = %s\n", tblname);
+	DBG("-----------------------------------\n");
+#endif
+
+	flags = DB_CREATE | DB_THREAD;
+
+	if ((rc = bdb->open(bdb, NULL, tblname, NULL, DB_HASH, flags, 0664)) != 0)
+	{ 
+		_db->dbenv->err(_db->dbenv, rc, "DB->open: %s", tblname);
+		LOG(L_ERR, "sclib_create_table:bdb open: %s.\n",db_strerror(rc));
+		pkg_free(tp);
+		return NULL;
+	}
+
+	if(!lock_init(&tp->sem))
+	{
+		goto error;
+	}
+	
+	tp->name.s = (char*)pkg_malloc(_s->len*sizeof(char));
+	memcpy(tp->name.s, _s->s, _s->len);
+	tp->name.len = _s->len;
+	tp->db=bdb;
+	tp->ncols=0;
+	tp->nkeys=0;
+	tp->ro=0;    /*0=ReadWrite ; 1=ReadOnly*/
+	tp->ino=0;   /*inode*/
+	tp->logflags=0; /*bitmap; 4=Delete, 2=Update, 1=Insert, 0=None*/
+	tp->fp=0;
+	tp->t=0;
+	
+	for(i=0;i<MAX_NUM_COLS;i++)
+		tp->colp[i] = NULL;
+
+	/*load metadata; seeded when the database files are created*/
+	
+	/*initialize columns with metadata*/
+	rc = load_metadata_columns(tp);
+	if(rc!=0)
+	{
+		LOG(L_ERR, "sclib_create_table: FAILED to load METADATA COLS in table: %s.\n", tblname);
+		goto error;
+	}
+
+	rc = load_metadata_keys(tp);
+	if(rc!=0)
+	{
+		LOG(L_ERR, "sclib_create_table: FAILED to load METADATA KEYS in table: %s.\n", tblname);
+		/*will have problems later figuring column types*/
+		goto error;
+	}
+
+	/*opened RW by default; Query to set the RO flag */
+	rc = load_metadata_readonly(tp);
+	if(rc!=0)
+	{
+		LOG(L_INFO, "sclib_create_table: No METADATA_READONLY in table: %s.\n", tblname);
+		/*non-critical; table will default to READWRITE*/
+	}
+
+	if(tp->ro)
+	{	
+		/*schema defines this table RO readonly*/
+#ifdef SC_EXTRA_DEBUG
+		DBG("TABLE %.*s is changing to READONLY mode\n"
+			, tp->name.len, tp->name.s);
+#endif
+		
+		if ((rc = bdb->close(bdb,0)) != 0)
+		{ 
+			_db->dbenv->err(_db->dbenv, rc, "DB->close: %s", tblname);
+			LOG(L_ERR, "sclib_create_table:bdb close: %s.\n",db_strerror(rc));
+			goto error;
+		}
+		
+		bdb = NULL;
+		if ((rc = db_create(&bdb, _db->dbenv, 0)) != 0)
+		{ 
+			_db->dbenv->err(_db->dbenv, rc, "database create");
+			LOG(L_ERR, "sclib_create_table: error in db_create.\n");
+			goto error;
+		}
+		
+		flags = DB_THREAD | DB_RDONLY;
+		if ((rc = bdb->open(bdb, NULL, tblname, NULL, DB_HASH, flags, 0664)) != 0)
+		{ 
+			_db->dbenv->err(_db->dbenv, rc, "DB->open: %s", tblname);
+			LOG(L_ERR, "sclib_create_table:bdb open: %s.\n",db_strerror(rc));
+			goto error;
+		}
+		tp->db=bdb;
+	}
+	
+	/* set the journaling flags; the flags indicate which operations
+	   need to be journaled (e.g. it is possible to journal only INSERT)
+	*/
+	rc = load_metadata_logflags(tp);
+	if(rc!=0)
+		LOG(L_INFO, "sclib_create_table: No METADATA_LOGFLAGS in table: %s.\n", tblname);
+	
+	if ((tp->logflags & JLOG_FILE) == JLOG_FILE)
+		sclib_create_journal(tp);
+	
+	return tp;
+	
+error:
+	if(tp) 
+	{
+		pkg_free(tp->name.s);
+		pkg_free(tp);
+	}
+	return NULL;
+}
+
+int sclib_create_journal(table_p _tp)
+{
+	char *s;
+	char fn[1024];
+	char d[64];
+	FILE *fp = NULL;
+	struct tm *t;
+	int bl;
+	database_p _db_p = *_cachedb;
+	time_t tim = time(NULL);
+	
+	if(! _db_p || ! _tp) return -1;
+	if(! _db_parms->log_enable) return 0;
+	/* journal filename ; e.g. '/var/openser/db/location-YYYYMMDDhhmmss.jnl' */
+	s=fn;
+	strncpy(s, _db_p->name.s, _db_p->name.len);
+	s+=_db_p->name.len;
+	
+	*s = '/';
+	s++;
+	
+	strncpy(s, _tp->name.s, _tp->name.len);
+	s+=_tp->name.len;
+	
+	t = localtime( &tim );
+	bl=strftime(d,sizeof(d),"-%Y%m%d%H%M%S.jnl",t);
+	strncpy(s, d, bl);
+	s+= bl;
+	*s = 0;
+	
+	if(_tp->fp)
+	{	/* must be rolling. */
+		if( fclose(_tp->fp) )
+		{	LOG(L_ERR, "sclib_create_journal: Failed to Close Log in table: %.*s .\n"
+				,_tp->name.len, _tp->name.s);
+			return -1;
+		}
+	}
+	
+	if( (fp = fopen(fn, "w")) != NULL )
+	{
+		_tp->fp = fp;
+	}
+	else
+	{
+		LOG(L_ERR, "sclib_create_journal: Failed to Open Log in table: %.*s .\n"
+			,_tp->name.len, _tp->name.s);
+		return -1;
+	}
+	
+	_tp->t = tim;
+	return 0;
+
+}
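The filename layout assembled above can be sketched as a standalone helper. This is a minimal sketch, not module code: `make_journal_name()` and its fixed-size static buffer are illustrative assumptions; only the `-%Y%m%d%H%M%S.jnl` strftime pattern and the `<db_dir>/<table>` prefix come from `sclib_create_journal()` itself.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Illustrative sketch of the journal filename layout built by
 * sclib_create_journal(): "<db_dir>/<table>-YYYYMMDDhhmmss.jnl" */
static char jn_buf[1024];

static const char *make_journal_name(const char *db_dir, const char *table,
	time_t tim)
{
	char stamp[64];
	struct tm *t = localtime(&tim);

	if (t == NULL)
		return NULL;
	/* same pattern the module feeds to strftime */
	if (strftime(stamp, sizeof(stamp), "-%Y%m%d%H%M%S.jnl", t) == 0)
		return NULL;
	if (snprintf(jn_buf, sizeof(jn_buf), "%s/%s%s", db_dir, table, stamp)
			>= (int)sizeof(jn_buf))
		return NULL;
	return jn_buf;
}
```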
+
+int load_metadata_columns(table_p _tp)
+{
+	int ret,n,len;
+	char dbuf[MAX_ROW_SIZE];
+	char *s = NULL;
+	char cn[64], ct[16];
+	DB *db = NULL;
+	DBT key, data;
+	column_p col;
+	ret = n = len = 0;
+	
+	if(!_tp || !_tp->db)
+		return -1;
+	
+	if(_tp->ncols!=0)
+		return 0;
+	
+	db = _tp->db;
+	memset(&key, 0, sizeof(DBT));
+	memset(&data, 0, sizeof(DBT));
+	memset(dbuf, 0, MAX_ROW_SIZE);
+
+	key.data = METADATA_COLUMNS;
+	key.size = strlen(METADATA_COLUMNS);
+
+	/*memory for the result*/
+	data.data = dbuf;
+	data.ulen = MAX_ROW_SIZE;
+	data.flags = DB_DBT_USERMEM;
+	
+	if ((ret = db->get(db, NULL, &key, &data, 0)) != 0) 
+	{
+		db->err(db, ret, "load_metadata_columns DB->get failed");
+		LOG(L_ERR, "load_metadata_columns: FAILED to find METADATA_COLUMNS in DB \n");
+		return -1;
+	}
+
+	/* e.g. dbuf = "table_name(str) table_version(int)" */
+	s = strtok(dbuf, " ");
+	while(s!=NULL && n<MAX_NUM_COLS) 
+	{
+		/* e.g. cn = "table_name", ct = "str" */
+		sscanf(s,"%20[^(](%10[^)])", cn, ct);
+		
+		/* create column*/
+		col = (column_p) pkg_malloc(sizeof(column_t));
+		if(!col)
+		{	LOG(L_ERR, "load_metadata_columns: out of memory \n");
+			return -1;
+		}
+		
+		/* set name*/
+		len = strlen( cn );
+		col->name.s = (char*)pkg_malloc(len * sizeof(char));
+		if(!col->name.s)
+		{	LOG(L_ERR, "load_metadata_columns: out of memory \n");
+			pkg_free(col);
+			return -1;
+		}
+		memcpy(col->name.s, cn, len );
+		col->name.len = len;
+		
+		/*set column type*/
+		if(strncmp(ct, "str", 3)==0)
+		{	col->type = DB_STRING;
+		}
+		else if(strncmp(ct, "int", 3)==0)
+		{	col->type = DB_INT;
+		}
+		else if(strncmp(ct, "double", 6)==0)
+		{	col->type = DB_DOUBLE;
+		}
+		else if(strncmp(ct, "datetime", 8)==0)
+		{	col->type = DB_DATETIME;
+		}
+		else
+		{	col->type = DB_STRING;
+		}
+		
+		col->flag = 0;
+		_tp->colp[n] = col;
+		n++;
+		_tp->ncols++;
+		s=strtok(NULL, " ");
+	}
+
+	return 0;
+}
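The METADATA_COLUMNS record parsed above is a space-separated list of `name(type)` tokens. A minimal standalone sketch of that tokenizing, using the same strtok/sscanf approach as the loop above (the `parse_columns()` helper, its buffer size, and its array limits are assumptions, not module code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative parser for the "name(type) name(type) ..." metadata format
 * read by load_metadata_columns(); returns the column count or -1. */
static int parse_columns(const char *meta,
	char names[][21], char types[][11], int max)
{
	char buf[256];
	char *s;
	int n = 0;

	strncpy(buf, meta, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';
	for (s = strtok(buf, " "); s != NULL && n < max; s = strtok(NULL, " ")) {
		/* e.g. "table_name(str)" -> name="table_name", type="str" */
		if (sscanf(s, "%20[^(](%10[^)])", names[n], types[n]) != 2)
			return -1;
		n++;
	}
	return n;
}
```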
+
+int load_metadata_keys(table_p _tp)
+{
+	int ret,n,ci;
+	char dbuf[MAX_ROW_SIZE];
+	char *s = NULL;
+	DB *db = NULL;
+	DBT key, data;
+	ret = n = ci = 0;
+	
+	if(!_tp || !_tp->db)
+		return -1;
+	
+	db = _tp->db;
+	memset(&key, 0, sizeof(DBT));
+	memset(&data, 0, sizeof(DBT));
+	memset(dbuf, 0, MAX_ROW_SIZE);
+	key.data = METADATA_KEY;
+	key.size = strlen(METADATA_KEY);
+	data.data = dbuf;
+	data.ulen = MAX_ROW_SIZE;
+	data.flags = DB_DBT_USERMEM;
+	
+	if ((ret = db->get(db, NULL, &key, &data, 0)) != 0) 
+	{
+		db->err(db, ret, "load_metadata_keys DB->get failed");
+		LOG(L_ERR, "load_metadata_keys: FAILED to find METADATA in table \n");
+		return ret;
+	}
+	
+	s = strtok(dbuf, " ");
+	while(s!=NULL && n< _tp->ncols) 
+	{	ret = sscanf(s,"%i", &ci);
+		if(ret != 1) return -1;
+		if( _tp->colp[ci] ) 
+		{	_tp->colp[ci]->flag = 1;
+			_tp->nkeys++;
+		}
+		n++;
+		s=strtok(NULL, " ");
+	}
+
+	return 0;
+}
+
+
+int load_metadata_readonly(table_p _tp)
+{
+	int i, ret;
+	char dbuf[MAX_ROW_SIZE];
+
+	DB *db = NULL;
+	DBT key, data;
+	i = 0;
+	
+	if(!_tp || !_tp->db)
+		return -1;
+	
+	db = _tp->db;
+	memset(&key, 0, sizeof(DBT));
+	memset(&data, 0, sizeof(DBT));
+	memset(dbuf, 0, MAX_ROW_SIZE);
+	key.data = METADATA_READONLY;
+	key.size = strlen(METADATA_READONLY);
+	data.data = dbuf;
+	data.ulen = MAX_ROW_SIZE;
+	data.flags = DB_DBT_USERMEM;
+	
+	if ((ret = db->get(db, NULL, &key, &data, 0)) != 0) 
+	{	return ret;
+	}
+	
+	if( 1 == sscanf(dbuf,"%i", &i) )
+		_tp->ro=(i>0)?1:0;
+	
+	return 0;
+}
+
+int load_metadata_logflags(table_p _tp)
+{
+	int i, ret;
+	char dbuf[MAX_ROW_SIZE];
+
+	DB *db = NULL;
+	DBT key, data;
+	i = 0;
+	
+	if(!_tp || !_tp->db)
+		return -1;
+	
+	db = _tp->db;
+	memset(&key, 0, sizeof(DBT));
+	memset(&data, 0, sizeof(DBT));
+	memset(dbuf, 0, MAX_ROW_SIZE);
+	key.data = METADATA_LOGFLAGS;
+	key.size = strlen(METADATA_LOGFLAGS);
+	data.data = dbuf;
+	data.ulen = MAX_ROW_SIZE;
+	data.flags = DB_DBT_USERMEM;
+	
+	if ((ret = db->get(db, NULL, &key, &data, 0)) != 0) 
+	{	return ret;
+	}
+	
+	if( 1 == sscanf(dbuf,"%i", &i) )
+		_tp->logflags=i;
+	
+	return 0;
+}
+
+
+/*creates a composite key _k of length _klen from n values of _v;
+  provide your own initialized memory for target _k and _klen;
+  resulting value: _k = "KEY1 | KEY2"
+  ko = key only
+*/
+int sclib_valtochar(table_p _tp, int* _lres, char* _k, int* _klen, db_val_t* _v, int _n, int _ko)
+{
+	char *p; 
+	char sk[MAX_ROW_SIZE]; /* subkey (sk) value */
+	char* delim = DELIM;
+	char* cNULL = "NULL";
+	int  len, total, sum;
+	int i, j, k;
+	p =  _k;
+	len = sum = total = 0;
+	i = j = k = 0;
+	
+	if(!_tp) return -1;
+	if(!_v || (_n<1) ) return -1;
+	if(!_k || !_klen ) return -1;
+	if( *_klen < 1)    return -1;
+	
+	memset(sk, 0, MAX_ROW_SIZE);
+	total = *_klen;
+	*_klen = 0; /* running sum */
+	
+	if(! _lres)
+	{	
+#ifdef SC_EXTRA_DEBUG
+		DBG("-------------------------------------------------\n");
+		DBG("-- sclib_valtochar: schema has NOT specified any keys! \n");
+		DBG("-------------------------------------------------\n");	
+#endif
+
+		/* schema has not specified keys
+		   just use the provided data in order provided
+		*/
+		for(i=0;i<_n;i++)
+		{	len = total - sum;
+			if ( sc_val2str(&_v[i], sk, &len) != 0 ) 
+			{	LOG(L_ERR, "sclib_valtochar: error building composite key\n");
+				return -2;
+			}
+
+			sum += len;
+			if(sum > total)
+			{	LOG(L_ERR, "[sclib_valtochar]: Destination buffer too short for subval %s\n",sk);
+				return -2;
+			} 
+
+			/* write sk */
+			strncpy(p, sk, len);
+			p += len;
+			*_klen = sum;
+
+			sum += DELIM_LEN;
+			if(sum > total)
+			{	LOG(L_ERR, "[sclib_valtochar]: Destination buffer too short for delim\n");
+				return -3;
+			}
+			
+			/* write delim */
+			strncpy(p, delim, DELIM_LEN);
+			p += DELIM_LEN;
+			*_klen = sum;
+		}
+		return 0;
+	}
+
+
+	/*
+	  schema has specified keys
+	  verify all schema keys are provided
+	  use 'NULL' for those that are missing.
+	*/
+	for(i=0; i<_tp->ncols; i++)
+	{	/* i indexes columns in schema order */
+		if( _ko)
+		{	/* keymode; skip over non-key columns */
+			if( ! _tp->colp[i]->flag) 
+				continue; 
+		}
+		
+		for(j=0; j<_n; j++)
+		{	
+			/*
+			  j indexes the columns provided in _k
+			  which may be less than the total required by
+			  the schema. the app does not know the order
+			  of the columns in our schema!
+			 */
+			k = (_lres) ? _lres[j] : j;
+			
+			/*
+			 * k index will remap back to our schema order; like i
+			 */
+			if(i == k)
+			{
+				/*
+				 KEY was provided; append to buffer;
+				 _k[j] contains a key, but its a key that 
+				 corresponds to column k of our schema.
+				 now we know its a match, and we dont need
+				 index k for anything else
+				*/
+#ifdef SC_EXTRA_DEBUG
+				DBG("-- KEY PROVIDED[%i]: %.*s.%.*s \n", i 
+					, _tp->name.len , ZSW(_tp->name.s) 
+					, _tp->colp[i]->name.len, ZSW(_tp->colp[i]->name.s)
+				   );
+#endif
+
+				len = total - sum;
+				if ( sc_val2str(&_v[j], sk, &len) != 0)
+				{	LOG(L_ERR, "[sclib_valtochar]: Destination buffer too short for subval %s\n",sk);
+					return -4;
+				}
+				
+				sum += len;
+				if(sum > total)
+				{	LOG(L_ERR, "[sclib_valtochar]: Destination buffer too short for subval %s\n",sk);
+					return -5;
+				}
+
+				strncpy(p, sk, len);
+				p += len;
+				*_klen = sum;
+
+				sum += DELIM_LEN;
+				if(sum > total)
+				{	LOG(L_ERR, "[sclib_valtochar]: Destination buffer too short for delim\n");
+					return -5;
+				} 
+				
+				/* append delim */
+				strncpy(p, delim, DELIM_LEN);
+				p += DELIM_LEN;
+				*_klen = sum;
+				
+				
+				/* take us out of inner for loop
+				   and at the end of the outer loop
+				   to look for our next schema key
+				*/
+				goto next;
+			}
+			
+		}
+
+		/*
+		 NO KEY provided; append a 'NULL' value since i
+		 is considered a key according to our schema.
+		*/
+#ifdef SC_EXTRA_DEBUG
+		DBG("-- Missing KEY[%i]: %.*s.%.*s \n", i
+			, _tp->name.len , ZSW(_tp->name.s) 
+			, _tp->colp[i]->name.len, ZSW(_tp->colp[i]->name.s)
+		   );
+#endif
+		len = strlen(cNULL);
+		sum += len;
+		if(sum > total)
+		{	LOG(L_ERR, "[sclib_valtochar]: Destination buffer too short for subval %s\n",cNULL);
+			return -5;
+		}
+		
+		strncpy(p, cNULL, len);
+		p += len;
+		*_klen = sum;
+		
+		sum += DELIM_LEN;
+		if(sum > total)
+		{	LOG(L_ERR, "[sclib_valtochar]: Destination buffer too short for delim\n");
+			return -5;
+		} 
+		
+		strncpy(p, delim, DELIM_LEN);
+		p += DELIM_LEN;
+		*_klen = sum;
+next:
+		continue;
+	}
+
+
+
+	return 0;
+}
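The key layout produced by `sclib_valtochar()` is each value followed by the `|` delimiter, with the literal string `NULL` standing in for schema keys the caller did not supply. A minimal sketch under those assumptions; `build_key()` is an illustrative helper, not part of the module, and it takes ready-made strings rather than `db_val_t` values:

```c
#include <assert.h>
#include <string.h>

#define KDELIM "|"

/* Illustrative composite-key builder: "VAL1|VAL2|...|", with "NULL"
 * substituted for any missing value; returns the key length or -1. */
static int build_key(char *k, size_t ksz, const char **vals, int n)
{
	size_t used = 0;
	int i;

	k[0] = '\0';
	for (i = 0; i < n; i++) {
		/* a missing key column is written as the literal "NULL" */
		const char *v = vals[i] ? vals[i] : "NULL";
		size_t need = strlen(v) + strlen(KDELIM);

		if (used + need >= ksz)
			return -1; /* destination buffer too short */
		strcat(k, v);
		strcat(k, KDELIM);
		used += need;
	}
	return (int)used;
}
```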
+
+
+/**
+ *
+ */
+int db_free(database_p _dbp)
+{
+	tbl_cache_p _tbc = NULL, _tbc0=NULL;
+	if(!_dbp)
+		return -1;
+
+	_tbc = _dbp->tables;
+
+	while(_tbc)
+	{
+		_tbc0 = _tbc->next;
+		tbl_cache_free(_tbc);
+		_tbc = _tbc0;
+	}
+	
+	if(_dbp->dbenv)
+		_dbp->dbenv->close(_dbp->dbenv, 0);
+	
+	if(_dbp->name.s)
+		pkg_free(_dbp->name.s);
+	
+	pkg_free(_dbp);
+
+	return 0;
+}
+
+
+/**
+ *
+ */
+int tbl_cache_free(tbl_cache_p _tbc)
+{
+	if(!_tbc)
+		return -1;
+	
+	lock_get(&_tbc->sem);
+	
+	if(_tbc->dtp)
+		tbl_free(_tbc->dtp);
+	
+	lock_destroy(&_tbc->sem);
+	pkg_free(_tbc);
+
+	return 0;
+}
+
+
+/**
+ * close DB (sync data to disk) and free mem
+ */
+int tbl_free(table_p _tp)
+{	int i;
+	if(!_tp)
+		return -1;
+
+	if(_tp->db)
+		_tp->db->close(_tp->db, 0);
+	
+	if(_tp->fp)
+		fclose(_tp->fp);
+
+	if(_tp->name.s)
+		pkg_free(_tp->name.s);
+	
+	for(i=0;i<_tp->ncols;i++)
+	{	if(_tp->colp[i])
+		{	pkg_free(_tp->colp[i]->name.s);
+			pkg_free(_tp->colp[i]);
+		}
+	}
+
+	pkg_free(_tp);
+	return 0;
+}
+
+int sclib_recover(table_p _tp, int _rc)
+{
+	switch(_rc)
+	{
+		case DB_LOCK_DEADLOCK:
+			LOG(L_ERR, "[sclib_recover] DB_LOCK_DEADLOCK detected !!\n");
+			break;
+
+		case DB_RUNRECOVERY:
+			LOG(L_ERR, "[sclib_recover] DB_RUNRECOVERY detected !! \n");
+			sclib_destroy();
+			exit(1);
+			break;
+	}
+	
+	return 0;
+}

+ 148 - 0
modules/db_berkeley/bdb_lib.h

@@ -0,0 +1,148 @@
+/*
+ * $Id$
+ *
+ * db_berkeley module, portions of this code were templated using
+ * the dbtext and postgres modules.
+ *
+ * Copyright (C) 2007 Cisco Systems
+ *
+ * This file is part of openser, a free SIP server.
+ *
+ * openser is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version
+ *
+ * openser is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License 
+ * along with this program; if not, write to the Free Software 
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * 
+ * History:
+ * --------
+ * 2007-09-19  genesis (wiquan)
+ */
+
+
+#ifndef _BDB_LIB_H_
+#define _BDB_LIB_H_
+
+#include <stdlib.h>
+#include <syslog.h>
+#include <sys/stat.h>
+#include <db.h>
+
+#include "../../str.h"
+#include "../../db/db.h"
+#include "../../db/db_val.h"
+#include "../../locking.h"
+
+/*max number of columns in a table*/
+#define MAX_NUM_COLS 32
+
+/*max char width of a table row*/
+#define MAX_ROW_SIZE 2048
+
+/*max char width of a table name*/
+#define MAX_TABLENAME_SIZE 64
+
+#define METADATA_COLUMNS "METADATA_COLUMNS"
+#define METADATA_KEY "METADATA_KEY"
+#define METADATA_READONLY "METADATA_READONLY"
+#define METADATA_LOGFLAGS "METADATA_LOGFLAGS"
+
+/*journal logging flag masks */
+#define JLOG_NONE   0
+#define JLOG_INSERT 1
+#define JLOG_DELETE 2
+#define JLOG_UPDATE 4
+#define JLOG_FILE   8
+#define JLOG_STDOUT 16
+#define JLOG_SYSLOG 32
+
+#define DELIM "|"
+#define DELIM_LEN (sizeof(DELIM)-1)
+
+typedef db_val_t sc_val_t, *sc_val_p;
+
+typedef struct _row
+{
+	sc_val_p fields;
+	struct _row *prev;
+	struct _row *next;
+} row_t, *row_p;
+
+typedef struct _column
+{
+	str name;
+	int type;
+	int flag;
+} column_t, *column_p;
+
+typedef struct _table
+{
+	str name;
+	DB *db;
+	gen_lock_t sem;
+	column_p colp [MAX_NUM_COLS];
+	int ncols;
+	int nkeys;
+	int ro;       /*db readonly flag*/
+	int logflags; /*flags indicating what/where to journal*/
+	FILE* fp;     /*jlog file pointer */
+	time_t t;     /*jlog creation time*/
+	ino_t ino;
+} table_t, *table_p;
+
+typedef struct _tbl_cache
+{
+	gen_lock_t sem;
+	table_p dtp;
+	struct _tbl_cache *prev;
+	struct _tbl_cache *next;
+} tbl_cache_t, *tbl_cache_p;
+
+typedef struct _database
+{
+	str name;
+	DB_ENV *dbenv;
+	tbl_cache_p tables;
+} database_t, *database_p;
+
+typedef struct _db_parms
+{
+	u_int32_t cache_size;
+	int auto_reload;
+	int log_enable;
+	int journal_roll_interval;
+} db_parms_t, *db_parms_p;
+
+
+int sclib_init(db_parms_p _parms);
+int sclib_destroy(void);
+int sclib_close(char* _n);
+int sclib_reopen(char* _n);
+int sclib_recover(table_p _tp, int error_code);
+void sclib_log(int op, table_p _tp, char* _msg, int len);
+int sclib_create_dbenv(DB_ENV **dbenv, char* home);
+int sclib_create_journal(table_p _tp);
+database_p  	sclib_get_db(str *_s);
+tbl_cache_p 	sclib_get_table(database_p _db, str *_s);
+table_p 	sclib_create_table(database_p _db, str *_s);
+
+int db_free(database_p _dbp);
+int tbl_cache_free(tbl_cache_p _tbc);
+int tbl_free(table_p _tp);
+
+int load_metadata_columns(table_p _tp);
+int load_metadata_keys(table_p _tp);
+int load_metadata_readonly(table_p _tp);
+int load_metadata_logflags(table_p _tp);
+
+int sclib_valtochar(table_p _tp, int* _lres, char* _k, int* _klen, db_val_t* _v, int _n, int _ko);
+
+#endif
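The JLOG_* values above form a bitmap, so a table's `logflags` combines destination bits (JLOG_FILE, JLOG_STDOUT, JLOG_SYSLOG) with any subset of operation bits. A small sketch of that masking; the `journals_op()` helper is illustrative, while the flag values are copied from this header:

```c
#include <assert.h>

/* journal logging flag masks, as defined in bdb_lib.h */
#define JLOG_NONE   0
#define JLOG_INSERT 1
#define JLOG_DELETE 2
#define JLOG_UPDATE 4
#define JLOG_FILE   8

/* an operation (or destination) is enabled when all its bits are set,
 * the same test the module uses, e.g. (logflags & JLOG_FILE) == JLOG_FILE */
static int journals_op(int logflags, int op)
{
	return (logflags & op) == op;
}
```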

+ 795 - 0
modules/db_berkeley/bdb_res.c

@@ -0,0 +1,795 @@
+/*
+ * $Id$
+ *
+ * db_berkeley module, portions of this code were templated using
+ * the dbtext and postgres modules.
+ *
+ * Copyright (C) 2007 Cisco Systems
+ *
+ * This file is part of openser, a free SIP server.
+ *
+ * openser is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version
+ *
+ * openser is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License 
+ * along with this program; if not, write to the Free Software 
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * 
+ * History:
+ * --------
+ * 2007-09-19  genesis (wiquan)
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <sys/types.h>
+#include "../../mem/mem.h"
+#include "bdb_res.h"
+
+
+/**
+* 
+*/
+int sc_get_columns(table_p _tp, db_res_t* _res, int* _lres, int _nc)
+{
+	int col, len;
+
+        if (!_res) 
+	{	LOG(L_ERR, "sc_get_columns: db_res_t parameter cannot be NULL\n");
+                return -1;
+        }
+
+	if (_nc < 0 ) 
+	{	LOG(L_ERR, "sc_get_columns: _nc parameter cannot be negative \n");
+                return -1;
+        }
+
+        /* the number of rows (tuples) in the query result. */
+	RES_NUM_ROWS(_res) = 1;
+
+        if (!_lres) 
+		_nc = _tp->ncols;
+
+	/* Allocate storage to hold a pointer to each column name */
+        RES_NAMES(_res) = (db_key_t*)pkg_malloc(sizeof(db_key_t) * _nc);
+
+#ifdef SC_EXTRA_DEBUG
+	LOG(L_DBG, "sc_get_columns: %p=pkg_malloc(%lu) RES_NAMES\n"
+		, RES_NAMES(_res)
+		, (unsigned long)(sizeof(db_key_t) * _nc));
+#endif
+
+	if (!RES_NAMES(_res)) 
+	{
+                LOG(L_ERR, "sc_get_columns: Failed to allocate %lu bytes for column names\n"
+			, (unsigned long)(sizeof(db_key_t) * _nc));
+		
+                return -3;
+        }
+
+	/* Allocate storage to hold the type of each column */
+        RES_TYPES(_res) = (db_type_t*)pkg_malloc(sizeof(db_type_t) * _nc);
+
+#ifdef SC_EXTRA_DEBUG
+	LOG(L_DBG, "sc_get_columns: %p=pkg_malloc(%lu) RES_TYPES\n"
+		, RES_TYPES(_res)
+		, (unsigned long)(sizeof(db_type_t) * _nc));
+#endif
+
+        if (!RES_TYPES(_res)) 
+	{
+                LOG(L_ERR, "sc_get_columns: Failed to allocate %lu bytes for column types\n"
+			, (unsigned long)(sizeof(db_type_t) * _nc));
+		
+		/* Free previously allocated storage that was to hold column names */
+		LOG(L_DBG, "sc_get_columns: %p=pkg_free() RES_NAMES\n", RES_NAMES(_res));
+		pkg_free(RES_NAMES(_res));
+                return -4;
+        }
+
+	/* Save number of columns in the result structure */
+        RES_COL_N(_res) = _nc;
+
+	/* 
+	 * For each column both the name and the data type are saved.
+	 */
+	for(col = 0; col < _nc; col++) 
+	{
+		column_p cp = NULL;
+		cp = (_lres) ? _tp->colp[_lres[col]] : _tp->colp[col];
+		len = cp->name.len;
+		RES_NAMES(_res)[col] = pkg_malloc(len+1);
+		
+#ifdef SC_EXTRA_DEBUG
+		LOG(L_DBG, "sc_get_columns: %p=pkg_malloc(%d) RES_NAMES[%d]\n"
+			, RES_NAMES(_res)[col], len+1, col);
+#endif
+
+		if (! RES_NAMES(_res)[col]) 
+		{
+			LOG(L_ERR, "sc_get_columns: Failed to allocate %d bytes to hold column name\n", len+1);
+			return -1;
+		}
+		
+		memset((char *)RES_NAMES(_res)[col], 0, len+1);
+		strncpy((char *)RES_NAMES(_res)[col], cp->name.s, len); 
+
+		LOG(L_DBG, "sc_get_columns: RES_NAMES(%p)[%d]=[%s]\n"
+			, RES_NAMES(_res)[col]
+			, col
+			, RES_NAMES(_res)[col]);
+
+		RES_TYPES(_res)[col] = cp->type;
+	}
+	return 0;
+}
+
+
+
+/**
+ * Convert rows from Berkeley DB to db API representation
+ */
+int sc_convert_row(db_res_t* _res, char *bdb_result, int* _lres)
+{
+        int col, len, i, j;
+	char **row_buf, *s;
+	db_row_t* row = NULL;
+	col = len = i = j = 0;
+	
+        if (!_res)  
+	{	LOG(L_ERR, "sc_convert_row: db_res_t parameter cannot be NULL\n");
+                return -1;
+        }
+
+	/* Allocate a single row structure */
+	len = sizeof(db_row_t); 
+	row = (db_row_t*)pkg_malloc(len);
+        if (!row) 
+	{	LOG(L_ERR, "sc_convert_row: Failed to allocate %d bytes for row structure\n", len);
+                return -1;
+        }
+	memset(row, 0, len);
+	RES_ROWS(_res) = row;
+	
+	/* Save the number of rows in the current fetch */
+	RES_ROW_N(_res) = 1;
+
+	/* Allocate storage to hold the bdb result values */
+	len = sizeof(db_val_t) * RES_COL_N(_res);
+	ROW_VALUES(row) = (db_val_t*)pkg_malloc(len);
+        LOG(L_DBG, "sc_convert_row: %p=pkg_malloc(%d) ROW_VALUES for %d columns\n"
+		 , ROW_VALUES(row)
+		 , len
+		 , RES_COL_N(_res));
+
+        if (!ROW_VALUES(row)) 
+	{	LOG(L_ERR, "sc_convert_row: No memory left\n");
+                return -1;
+        }
+	memset(ROW_VALUES(row), 0, len);
+
+	/* Save the number of columns in the ROW structure */
+        ROW_N(row) = RES_COL_N(_res);
+
+	/*
+	 * Allocate an array of pointers, one per column.
+	 * It will be used to hold the address of the string representation of each column.
+	 */
+	len = sizeof(char *) * RES_COL_N(_res);
+	row_buf = (char **)pkg_malloc(len);
+        if (!row_buf) 
+	{	LOG(L_ERR, "[sc_convert_row]: Failed to allocate %d bytes for row buffer\n", len);
+		return -1;
+        }
+	memset(row_buf, 0, len);
+
+	/*populate the row_buf with bdb_result*/
+	/*bdb_result is memory from our caller's stack, so we copy it here*/
+	s = strtok(bdb_result, DELIM);
+	while( s!=NULL)
+	{
+
+		if(_lres)
+		{	
+			/*only requested cols (_c was specified)*/
+			for(i=0; i<ROW_N(row); i++)
+			{	if (col == _lres[i])
+				{
+					len = strlen(s);
+					row_buf[i] = pkg_malloc(len+1);
+					if (!row_buf[i])
+					{
+						LOG(L_ERR, "[sc_convert_row]: Failed to allocate %d bytes for row_buf[%d]\n", len+1, col);
+						return -1;
+					}
+					memset(row_buf[i], 0, len+1);
+					strncpy(row_buf[i], s, len);
+				}
+				
+			}
+		}
+		else 
+		{
+			len = strlen(s);
+			row_buf[col] = pkg_malloc(len+1);
+			if (!row_buf[col]) {
+				LOG(L_ERR, "[sc_convert_row]: Failed to allocate %d bytes for row_buf[%d]\n", len+1, col);
+				return -1;
+			}
+			memset(row_buf[col], 0, len+1);
+			strncpy(row_buf[col], s, len);
+		}
+
+		s = strtok(NULL, DELIM);
+		col++;
+	}
+
+	/*do the type conversion per col*/
+        for(col = 0; col < ROW_N(row); col++) 
+	{
+		/*skip the unrequested cols (as already specified)*/
+		if(!row_buf[col])  continue;
+
+		LOG(L_DBG, "sc_convert_row: col[%d]\n", col);
+		/* Convert the string representation into the value representation */
+		if (sc_str2val(	RES_TYPES(_res)[col]
+				, &(ROW_VALUES(row)[col])
+				, row_buf[col]
+				, strlen(row_buf[col])) < 0) 
+		{
+                        LOG(L_ERR, "sc_convert_row: Error while converting value\n");
+        		 LOG(L_DBG, "sc_convert_row: %p=pkg_free() _row\n", row);
+                        sc_free_row(row);
+                        return -3;
+                }
+        }
+
+	/* pkg_free() must be done for the above allocations now that the row has been converted.
+	 * During sc_convert_row (and subsequent sc_str2val) processing, data types that don't need to be
+	 * converted (namely STRINGS) have their addresses saved.  These data types should not have
+	 * their pkg_malloc() allocations freed here because they are still needed.  However, some data types
+	 * (ex: INT, DOUBLE) should have their pkg_malloc() allocations freed because during the conversion
+	 * process, their converted values are saved in the union portion of the db_val_t structure.
+	 *
+	 * Warning: when the converted row is no longer needed, the data types whose addresses
+	 * were saved in the db_val_t structure must be freed or a memory leak will happen.
+	 * This processing should happen in the sc_free_row() subroutine.  The caller of
+	 * this routine should ensure that sc_free_rows(), sc_free_row() or sc_free_result()
+	 * is eventually called.
+	 */
+	for (col=0; col<RES_COL_N(_res); col++) 
+	{
+		switch (RES_TYPES(_res)[col]) 
+		{
+			case DB_STRING:
+			case DB_STR:
+				break;
+			default:
+#ifdef SC_EXTRA_DEBUG
+			LOG(L_DBG, "sc_convert_row: col[%d] Col[%s] Type[%d] Freeing row_buf[%p]\n"
+				, col
+				, RES_NAMES(_res)[col], RES_TYPES(_res)[col]
+				, (char*) row_buf[col]);
+			
+			LOG(L_DBG, "sc_convert_row: %p=pkg_free() row_buf[%d]\n", (char *)row_buf[col], col);
+#endif
+
+			pkg_free((char *)row_buf[col]);
+		}
+		/* The following housekeeping may not be technically required, but it is a good practice
+		 * to NULL pointer fields that are no longer valid.  Note that DB_STRING fields have not
+		 * been pkg_free(). NULLing DB_STRING fields would normally not be good to do because a memory
+		 * leak would occur.  However, the sc_convert_row() routine has saved the DB_STRING pointer
+		 * in the db_val_t structure.  The db_val_t structure will eventually be used to pkg_free()
+		 * the DB_STRING storage.
+		 */
+		row_buf[col] = (char *)NULL;
+	}
+
+	LOG(L_DBG, "sc_convert_row: %p=pkg_free() row_buf\n", row_buf);
+	pkg_free(row_buf);
+	row_buf = NULL;
+
+        return 0;
+
+}
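sc_convert_row() recovers the column strings by running strtok over the stored record with DELIM. A standalone sketch of that split (`split_row()` is an illustrative stand-in, not module code):

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-in for the strtok loop in sc_convert_row():
 * split a "|"-delimited record into column strings (modifies rec). */
static int split_row(char *rec, char *cols[], int max)
{
	int n = 0;
	char *s;

	for (s = strtok(rec, "|"); s != NULL && n < max; s = strtok(NULL, "|"))
		cols[n++] = s;
	return n;
}
```

Note that strtok collapses adjacent delimiters, so a genuinely empty field would vanish from the token stream; storing the literal `NULL` string for missing values, as the key-building code does, sidesteps that.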
+
+/*rx is row index*/
+int sc_append_row(db_res_t* _res, char *bdb_result, int* _lres, int _rx)
+{
+        int col, len, i, j;
+	char **row_buf, *s;
+	db_row_t* row = NULL;
+	col = len = i = j = 0;
+	
+        if (!_res)  
+	{	LOG(L_ERR, "sc_append_row: db_res_t parameter cannot be NULL\n");
+                return -1;
+        }
+	
+	row = &(RES_ROWS(_res)[_rx]);
+	
+	/* Allocate storage to hold the bdb result values */
+	len = sizeof(db_val_t) * RES_COL_N(_res);
+	ROW_VALUES(row) = (db_val_t*)pkg_malloc(len);
+	
+        if (!ROW_VALUES(row)) 
+	{	LOG(L_ERR, "sc_append_row: No memory left\n");
+                return -1;
+        }
+	
+	memset(ROW_VALUES(row), 0, len);
+	
+	/* Save the number of columns in the ROW structure */
+        ROW_N(row) = RES_COL_N(_res);
+	
+	/*
+	 * Allocate an array of pointers, one per column.
+	 * It will be used to hold the address of the string representation of each column.
+	 */
+	len = sizeof(char *) * RES_COL_N(_res);
+	row_buf = (char **)pkg_malloc(len);
+        if (!row_buf) 
+	{	LOG(L_ERR, "[sc_append_row]: Failed to allocate %d bytes for row buffer\n", len);
+		return -1;
+        }
+	memset(row_buf, 0, len);
+	
+	/*populate the row_buf with bdb_result*/
+	/*bdb_result is memory from our caller's stack, so we copy it here*/
+	s = strtok(bdb_result, DELIM);
+	while( s!=NULL)
+	{
+		
+		if(_lres)
+		{	
+			/*only requested cols (_c was specified)*/
+			for(i=0; i<ROW_N(row); i++)
+			{	if (col == _lres[i])
+				{
+					len = strlen(s);
+					row_buf[i] = pkg_malloc(len+1);
+					if (!row_buf[i])
+					{
+						LOG(L_ERR, "[sc_append_row]: Failed to allocate %d bytes for row_buf[%d]\n", len+1, col);
+						return -1;
+					}
+					memset(row_buf[i], 0, len+1);
+					strncpy(row_buf[i], s, len);
+				}
+				
+			}
+		}
+		else 
+		{
+			len = strlen(s);
+
+#ifdef SC_EXTRA_DEBUG
+		DBG("     [sc_append_row] : col[%i] = [%.*s]\n", col , len, s );
+#endif
+
+			row_buf[col] = (char*)pkg_malloc(len+1);
+			if (!row_buf[col]) 
+			{
+				LOG(L_ERR, "[sc_append_row]: Failed to allocate %d bytes for row_buf[%d]\n", len+1, col);
+				return -1;
+			}
+			memset(row_buf[col], 0, len+1);
+			strncpy(row_buf[col], s, len);
+		}
+		
+		s = strtok(NULL, DELIM);
+		col++;
+	}
+	
+	/*do the type conversion per col*/
+        for(col = 0; col < ROW_N(row); col++) 
+	{
+#ifdef SC_EXTRA_DEBUG
+		DBG("     [sc_append_row] tc 1: col[%i] == ", col );
+#endif
+
+		/*skip the unrequested cols (as already specified)*/
+		if(!row_buf[col])  continue;
+
+#ifdef SC_EXTRA_DEBUG
+		DBG("     tc 2: col[%i] \n", col );
+#endif
+
+		/* Convert the string representation into the value representation */
+		if (sc_str2val(	RES_TYPES(_res)[col]
+				, &(ROW_VALUES(row)[col])
+				, row_buf[col]
+				, strlen(row_buf[col])) < 0) 
+		{
+                        LOG(L_ERR, "sc_append_row: Error while converting value\n");
+        		 LOG(L_DBG, "sc_append_row: %p=pkg_free() _row\n", row);
+                        sc_free_row(row);
+                        return -3;
+                }
+		
+		LOG(L_DBG, "sc_append_row: col[%d] : %s\n", col, row_buf[col] );
+        }
+
+	/* pkg_free() must be done for the above allocations now that the row has been converted.
+	 * During sc_convert_row (and subsequent sc_str2val) processing, data types that don't need to be
+	 * converted (namely STRINGS) have their addresses saved.  These data types should not have
+	 * their pkg_malloc() allocations freed here because they are still needed.  However, some data types
+	 * (ex: INT, DOUBLE) should have their pkg_malloc() allocations freed because during the conversion
+	 * process, their converted values are saved in the union portion of the db_val_t structure.
+	 *
+	 * Warning: when the converted row is no longer needed, the data types whose addresses
+	 * were saved in the db_val_t structure must be freed or a memory leak will happen.
+	 * This processing should happen in the sc_free_row() subroutine.  The caller of
+	 * this routine should ensure that sc_free_rows(), sc_free_row() or sc_free_result()
+	 * is eventually called.
+	 */
+	for (col=0; col<RES_COL_N(_res); col++) 
+	{
+		if (RES_TYPES(_res)[col] != DB_STRING) 
+		{
+#ifdef SC_EXTRA_DEBUG
+			LOG(L_DBG, "sc_append_row: [%d][%d] Col[%s] Type[%d] Freeing row_buf[%i]\n"
+				, _rx, col, RES_NAMES(_res)[col], RES_TYPES(_res)[col], col);
+#endif
+			pkg_free((char *)row_buf[col]);
+		}
+		/* The following housekeeping may not be technically required, but it is a good practice
+		 * to NULL pointer fields that are no longer valid.  Note that DB_STRING fields have not
+		 * been pkg_free(). NULLing DB_STRING fields would normally not be good to do because a memory
+		 * leak would occur.  However, the sc_append_row() routine has saved the DB_STRING pointer
+		 * in the db_val_t structure.  The db_val_t structure will eventually be used to pkg_free()
+		 * the DB_STRING storage.
+		 */
+		row_buf[col] = (char *)NULL;
+	}
+
+	LOG(L_DBG, "sc_append_row: %p=pkg_free() row_buf\n", row_buf);
+	pkg_free(row_buf);
+	row_buf = NULL;
+        return 0;
+
+}
+
+
+
+int* sc_get_colmap(table_p _dtp, db_key_t* _k, int _n)
+{
+	int i, j, *_lref=NULL;
+	
+	if(!_dtp || !_k || _n < 0)
+		return NULL;
+	
+	_lref = (int*)pkg_malloc(_n*sizeof(int));
+	if(!_lref)
+		return NULL;
+	
+	for(i=0; i < _n; i++)
+	{
+		for(j=0; j<_dtp->ncols; j++)
+		{
+			if(strlen(_k[i])==_dtp->colp[j]->name.len
+			&& !strncasecmp(_k[i], _dtp->colp[j]->name.s,
+						_dtp->colp[j]->name.len))
+			{
+				_lref[i] = j;
+				break;
+			}
+		}
+		
+		if(j>=_dtp->ncols)
+		{
+			DBG("sc_get_colmap: ERROR column <%s> not found\n", _k[i]);
+			pkg_free(_lref);
+			return NULL;
+		}
+		
+	}
+	return _lref;
+}
+
+
+db_res_t* sc_result_new(void)
+{
+	db_res_t* _res = NULL;
+	_res = (db_res_t*)pkg_malloc(sizeof(db_res_t));
+        if (!_res) 
+	{
+                LOG(L_ERR, "sc_result_new: Failed to allocate %lu bytes for result structure\n"
+			, (unsigned long)sizeof(db_res_t));
+                return NULL;
+        }
+	
+	memset(_res, 0, sizeof(db_res_t));
+	return _res;
+}
+
+
+int sc_free_result(db_res_t* _res)
+{
+	sc_free_columns(_res);
+	sc_free_rows(_res);
+        LOG(L_DBG, "sc_free_result: %p=pkg_free() _res\n", _res);
+        pkg_free(_res);
+	_res = NULL;
+
+	return 0;
+}
+
+/**
+ * Release memory used by rows
+ */
+int sc_free_rows(db_res_t* _res)
+{
+	int row;
+
+	LOG(L_DBG, "sc_free_rows: Freeing %d rows\n", RES_ROW_N(_res));
+
+	for(row = 0; row < RES_ROW_N(_res); row++) 
+	{
+		LOG(L_DBG, "sc_free_rows: Row[%d]=%p\n", row, &(RES_ROWS(_res)[row]));
+		sc_free_row(&(RES_ROWS(_res)[row]));
+	}
+
+	RES_ROW_N(_res) = 0;
+
+        if (RES_ROWS(_res)) 
+	{
+                LOG(L_DBG, "sc_free_rows: %p=pkg_free() RES_ROWS\n", RES_ROWS(_res));
+		pkg_free(RES_ROWS(_res));
+		RES_ROWS(_res) = NULL;
+	}
+
+        return 0;
+}
+
+int sc_free_row(db_row_t* _row)
+{
+	int	col;
+	db_val_t* _val;
+
+	/* 
+	 * Loop thru each columm, then check to determine if the storage 
+	 * pointed to by db_val_t structure must be freed.
+	 * This is required for DB_STRING.  If this is not done, 
+	 * a memory leak will happen.
+	 * DB_STR types also fall in this category, however, they are 
+	 * currently not being converted (or checked below).
+	 */
+	for (col = 0; col < ROW_N(_row); col++) 
+	{
+		_val = &(ROW_VALUES(_row)[col]);
+		switch (VAL_TYPE(_val)) 
+		{
+		case DB_STRING:
+			LOG(L_DBG, "[sc_free_row]: %p=pkg_free() VAL_STRING[%d]\n", (char *)VAL_STRING(_val), col);
+			pkg_free((char *)(VAL_STRING(_val)));
+			VAL_STRING(_val) = (char *)NULL;
+			break;
+
+		case DB_STR:
+			LOG(L_DBG, "[sc_free_row]: %p=pkg_free() VAL_STR[%d]\n", (char *)(VAL_STR(_val).s), col);
+			pkg_free((char *)(VAL_STR(_val).s));
+			VAL_STR(_val).s = (char *)NULL;
+			break;
+		default:
+			break;
+		}
+	}
+
+	/* Free db_val_t structure. */
+        if (ROW_VALUES(_row)) 
+	{
+                LOG(L_DBG, "sc_free_row: %p=pkg_free() ROW_VALUES\n"
+			, ROW_VALUES(_row));
+		
+                pkg_free(ROW_VALUES(_row));
+		ROW_VALUES(_row) = NULL;
+	}
+        return 0;
+}
+
+/**
+ * Release memory used by columns
+ */
+int sc_free_columns(db_res_t* _res)
+{
+	int col;
+
+	/* Free memory previously allocated to save column names */
+        for(col = 0; col < RES_COL_N(_res); col++) 
+	{
+#ifdef SC_EXTRA_DEBUG
+                LOG(L_DBG, "sc_free_columns: Freeing RES_NAMES(%p)[%d] -> free(%p) '%s'\n"
+			, _res
+			, col
+			, RES_NAMES(_res)[col]
+			, RES_NAMES(_res)[col]);
+
+                LOG(L_DBG, "sc_free_columns: %p=pkg_free() RES_NAMES[%d]\n"
+			, RES_NAMES(_res)[col]
+			, col);
+#endif
+		
+                pkg_free((char *)RES_NAMES(_res)[col]);
+		RES_NAMES(_res)[col] = (char *)NULL;
+	}
+	
+        if (RES_NAMES(_res)) 
+	{
+                LOG(L_DBG, "sc_free_columns: %p=pkg_free() RES_NAMES\n"
+			, RES_NAMES(_res));
+		
+		pkg_free(RES_NAMES(_res));
+		RES_NAMES(_res) = NULL;
+	}
+	
+        if (RES_TYPES(_res)) 
+	{
+                LOG(L_DBG, "sc_free_columns: %p=pkg_free() RES_TYPES\n"
+			, RES_TYPES(_res));
+		
+		pkg_free(RES_TYPES(_res));
+		RES_TYPES(_res) = NULL;
+	}
+
+	return 0;
+}
+
+/**
+ * Check whether value type _t0 is incompatible with column type _t1.
+ * Returns 0 when compatible, 1 otherwise.
+ */
+int sc_is_neq_type(db_type_t _t0, db_type_t _t1)
+{
+	if(_t0 == _t1)
+		return 0;
+	
+	switch(_t1)
+	{
+		case DB_INT:
+		case DB_DATETIME:
+		case DB_BITMAP:
+			/* the numeric types are mutually convertible */
+			if(_t0==DB_INT || _t0==DB_DATETIME || _t0==DB_BITMAP)
+				return 0;
+			break;
+		case DB_STRING:
+		case DB_STR:
+			/* the text types are mutually convertible */
+			if(_t0==DB_STRING || _t0==DB_STR || _t0==DB_BLOB)
+				return 0;
+			break;
+		case DB_BLOB:
+			if(_t0==DB_STR || _t0==DB_STRING)
+				return 0;
+			break;
+		case DB_DOUBLE:
+			break;
+	}
+	return 1;
+}
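As a rough sanity check of the compatibility rules above, here is a self-contained sketch with a mock type enum (the real db_type_t lives in db/db_val.h): the numeric family and the text family are each mutually convertible, everything else must match exactly.

```c
#include <assert.h>

/* Mock of db_type_t, for illustration only. */
enum mock_type { T_INT, T_DOUBLE, T_STRING, T_STR, T_BLOB, T_DATETIME, T_BITMAP };

/* Simplified version of sc_is_neq_type: 0 = compatible, 1 = not. */
static int mock_is_neq_type(enum mock_type a, enum mock_type b)
{
	int num_a = (a == T_INT || a == T_DATETIME || a == T_BITMAP);
	int num_b = (b == T_INT || b == T_DATETIME || b == T_BITMAP);
	int txt_a = (a == T_STRING || a == T_STR || a == T_BLOB);
	int txt_b = (b == T_STRING || b == T_STR || b == T_BLOB);

	if (a == b)
		return 0;
	if (num_a && num_b)
		return 0;
	if (txt_a && txt_b)
		return 0;
	return 1;
}
```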
+
+
+/*
+ * Check whether a result row matches all of the key/operator/value
+ * filters; returns 1 on match, 0 on mismatch.
+ */
+int sc_row_match(db_key_t* _k, db_op_t* _op, db_val_t* _v, int _n, db_res_t* _r, int* _lkey )
+{
+	int i, res;
+	db_row_t* row = NULL;
+	
+	if(!_r || !_lkey)
+		return 1;
+	
+	row = RES_ROWS(_r);
+	
+	for(i=0; i<_n; i++)
+	{
+		res = sc_cmp_val(&(ROW_VALUES(row)[_lkey[i]]), &_v[i]);
+
+		if(!_op || !strcmp(_op[i], OP_EQ))
+		{
+			if(res!=0)
+				return 0;
+		}
+		else if(!strcmp(_op[i], OP_LT))
+		{
+			if(res!=-1)
+				return 0;
+		}
+		else if(!strcmp(_op[i], OP_GT))
+		{
+			if(res!=1)
+				return 0;
+		}
+		else if(!strcmp(_op[i], OP_LEQ))
+		{
+			if(res==1)
+				return 0;
+		}
+		else if(!strcmp(_op[i], OP_GEQ))
+		{
+			if(res==-1)
+				return 0;
+		}
+		else
+		{
+			/* unknown operator: treat as no match */
+			return 0;
+		}
+	}
+	
+	return 1;
+}
+
+/*
+ * Three-way comparison of two db values: returns -1, 0 or 1 as _vp is
+ * less than, equal to or greater than _v; -2 on an unknown type.
+ */
+int sc_cmp_val(db_val_t* _vp, db_val_t* _v)
+{
+	int _l, _n;
+	
+	if(!_vp && !_v)
+		return 0;
+	if(!_v)
+		return 1;
+	if(!_vp)
+		return -1;
+	if(_vp->nul && _v->nul)
+		return 0;
+	if(_v->nul)
+		return 1;
+	if(_vp->nul)
+		return -1;
+	
+	switch(VAL_TYPE(_v))
+	{
+		case DB_INT:
+			return (_vp->val.int_val<_v->val.int_val)?-1:
+					(_vp->val.int_val>_v->val.int_val)?1:0;
+		case DB_DOUBLE:
+			return (_vp->val.double_val<_v->val.double_val)?-1:
+					(_vp->val.double_val>_v->val.double_val)?1:0;
+		case DB_DATETIME:
+			return (_vp->val.int_val<_v->val.time_val)?-1:
+					(_vp->val.int_val>_v->val.time_val)?1:0;
+		case DB_STRING:
+			_l = strlen(_v->val.string_val);
+			_l = (_l>_vp->val.str_val.len)?_vp->val.str_val.len:_l;
+			_n = strncasecmp(_vp->val.str_val.s, _v->val.string_val, _l);
+			if(_n)
+				return _n;
+			if(_vp->val.str_val.len == strlen(_v->val.string_val))
+				return 0;
+			if(_l==_vp->val.str_val.len)
+				return -1;
+			return 1;
+		case DB_STR:
+			_l = _v->val.str_val.len;
+			_l = (_l>_vp->val.str_val.len)?_vp->val.str_val.len:_l;
+			_n = strncasecmp(_vp->val.str_val.s, _v->val.str_val.s, _l);
+			if(_n)
+				return _n;
+			if(_vp->val.str_val.len == _v->val.str_val.len)
+				return 0;
+			if(_l==_vp->val.str_val.len)
+				return -1;
+			return 1;
+		case DB_BLOB:
+			_l = _v->val.blob_val.len;
+			_l = (_l>_vp->val.str_val.len)?_vp->val.str_val.len:_l;
+			_n = strncasecmp(_vp->val.str_val.s, _v->val.blob_val.s, _l);
+			if(_n)
+				return _n;
+			if(_vp->val.str_val.len == _v->val.blob_val.len)
+				return 0;
+			if(_l==_vp->val.str_val.len)
+				return -1;
+			return 1;
+		case DB_BITMAP:
+			return (_vp->val.int_val<_v->val.bitmap_val)?-1:
+				(_vp->val.int_val>_v->val.bitmap_val)?1:0;
+	}
+	return -2;
+}
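The str/blob branches of sc_cmp_val all follow the same pattern: a case-insensitive compare on the common prefix, then ordering by length. A standalone sketch of that pattern:

```c
#include <assert.h>
#include <string.h>
#include <strings.h>

/* Three-way compare of two length-delimited buffers, mirroring the
 * DB_STR branch above: -1, 0 or 1; a shorter matching prefix sorts first. */
static int cmp_lstr(const char *a, int alen, const char *b, int blen)
{
	int l = (alen < blen) ? alen : blen;
	int n = strncasecmp(a, b, l);
	if (n)
		return n;
	if (alen == blen)
		return 0;
	return (alen < blen) ? -1 : 1;
}
```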

+ 67 - 0
modules/db_berkeley/bdb_res.h

@@ -0,0 +1,67 @@
+/*
+ * $Id$
+ *
+ * sleepycat module, portions of this code were templated using
+ * the dbtext and postgres modules.
+
+ * Copyright (C) 2007 Cisco Systems
+ *
+ * This file is part of openser, a free SIP server.
+ *
+ * openser is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version
+ *
+ * openser is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License 
+ * along with this program; if not, write to the Free Software 
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * 
+ * History:
+ * --------
+ * 2007-09-19  genesis (wiquan)
+ */
+
+
+#ifndef _BDB_RES_H_
+#define _BDB_RES_H_
+
+#include "../../db/db_op.h"
+#include "../../db/db_res.h"
+#include "../../db/db_con.h"
+#include "bdb_lib.h"
+#include "bdb_val.h"
+
+typedef struct _con
+{
+	database_p con;
+	db_res_t*  res;
+	row_p row;
+} sc_con_t, *sc_con_p;
+
+#define SC_CON_CONNECTION(db_con) (((sc_con_p)((db_con)->tail))->con)
+#define SC_CON_RESULT(db_con)     (((sc_con_p)((db_con)->tail))->res)
+#define SC_CON_ROW(db_con)        (((sc_con_p)((db_con)->tail))->row)
+
+int sc_get_columns(table_p _tp, db_res_t* _res, int* _lres, int _nc);
+int sc_convert_row( db_res_t* _res, char *bdb_result, int* _lres);
+int sc_append_row(db_res_t* _res, char *bdb_result, int* _lres, int _rx);
+int* sc_get_colmap(table_p _tp, db_key_t* _k, int _n);
+
+db_res_t*  sc_result_new(void);
+int sc_free_result(db_res_t* _res);
+int sc_free_columns(db_res_t* _res);
+int sc_free_rows(db_res_t* _res);
+int sc_free_row(db_row_t* _row);
+
+int sc_is_neq_type(db_type_t _t0, db_type_t _t1);
+int sc_row_match(db_key_t* _k, db_op_t* _op, db_val_t* _v, int _n, db_res_t* _r, int* lkey );
+int sc_cmp_val(db_val_t* _vp, db_val_t* _v);
+
+#endif
+

+ 55 - 0
modules/db_berkeley/bdb_util.c

@@ -0,0 +1,55 @@
+/*
+ * $Id$
+ *
+ * db_berkeley module, portions of this code were templated using
+ * the dbtext and postgres modules.
+
+ * Copyright (C) 2007 Cisco Systems
+ *
+ * This file is part of openser, a free SIP server.
+ *
+ * openser is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version
+ *
+ * openser is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License 
+ * along with this program; if not, write to the Free Software 
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * 
+ * History:
+ * --------
+ * 2007-09-19  genesis (wiquan)
+ */
+
+#include <string.h>
+#include <sys/types.h>
+#include <dirent.h>
+
+#include "bdb_util.h"
+
+/**
+ * Check whether the given path names an existing directory (i.e. a
+ * Berkeley DB environment); returns 1 if so, 0 otherwise.
+ */
+int sc_is_database(str *_s)
+{
+	DIR *dirp = NULL;
+	char buf[512];
+	
+	if(!_s || !_s->s || _s->len <= 0 || _s->len > 510)
+		return 0;
+	strncpy(buf, _s->s, _s->len);
+	buf[_s->len] = 0;
+	dirp = opendir(buf);
+	if(!dirp)
+		return 0;
+	closedir(dirp);
+
+	return 1;
+}
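sc_is_database simply tests whether the path is an openable directory. The same technique in isolation:

```c
#include <assert.h>
#include <dirent.h>

/* A path names a database environment iff it is an openable directory. */
static int dir_exists(const char *path)
{
	DIR *d = opendir(path);
	if (!d)
		return 0;
	closedir(d);
	return 1;
}
```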
+

+ 39 - 0
modules/db_berkeley/bdb_util.h

@@ -0,0 +1,39 @@
+/*
+ * $Id$
+ *
+ * db_berkeley module, portions of this code were templated using
+ * the dbtext and postgres modules.
+
+ * Copyright (C) 2007 Cisco Systems
+ *
+ * This file is part of openser, a free SIP server.
+ *
+ * openser is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version
+ *
+ * openser is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License 
+ * along with this program; if not, write to the Free Software 
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * 
+ * History:
+ * --------
+ * 2007-09-19  genesis (wiquan)
+ */
+
+
+#ifndef _BDB_UTIL_H_
+#define _BDB_UTIL_H_
+
+#include "../../str.h"
+
+int sc_is_database(str *);
+
+#endif
+

+ 239 - 0
modules/db_berkeley/bdb_val.c

@@ -0,0 +1,239 @@
+/*
+ * $Id$
+ *
+ * db_berkeley module, portions of this code were templated using
+ * the dbtext and postgres modules.
+
+ * Copyright (C) 2007 Cisco Systems
+ *
+ * This file is part of openser, a free SIP server.
+ *
+ * openser is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version
+ *
+ * openser is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License 
+ * along with this program; if not, write to the Free Software 
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * 
+ * History:
+ * --------
+ * 2007-09-19  genesis (wiquan)
+ */
+ 
+
+#include "../../db/db_val.h"
+#include "../../db/db_ut.h"
+#include "db_berkeley.h"
+#include "bdb_res.h"
+#include "bdb_val.h"
+#include <string.h>
+
+/**
+ * Convert a string into a db_val_t of type _t.
+ * String values are referenced, not copied.
+ */
+int sc_str2val(db_type_t _t, db_val_t* _v, char* _s, int _l)
+{
+
+	static str dummy_string = {"", 0};
+
+	if(!_s)
+	{
+		memset(_v, 0, sizeof(db_val_t));
+		/* Initialize the string pointers to a dummy empty
+		 * string so that we do not crash when the NULL flag
+		 * is set but the module does not check it properly
+		 */
+		VAL_STRING(_v) = dummy_string.s;
+		VAL_STR(_v) = dummy_string;
+		VAL_BLOB(_v) = dummy_string;
+		VAL_TYPE(_v) = _t;
+		VAL_NULL(_v) = 1;
+		return 0;
+	}
+	VAL_NULL(_v) = 0;
+
+	switch(_t) {
+	case DB_INT:
+		if (db_str2int(_s, &VAL_INT(_v)) < 0) {
+			LOG(L_ERR, "berkeley_db[str2val]: Error while converting INT value from string\n");
+			return -2;
+		} else {
+			VAL_TYPE(_v) = DB_INT;
+			return 0;
+		}
+		break;
+
+	case DB_BITMAP:
+		if (db_str2int(_s, &VAL_INT(_v)) < 0) {
+			LOG(L_ERR, "berkeley_db[str2val]: Error while converting BITMAP value from string\n");
+			return -3;
+		} else {
+			VAL_TYPE(_v) = DB_BITMAP;
+			return 0;
+		}
+		break;
+
+	case DB_DOUBLE:
+		if (db_str2double(_s, &VAL_DOUBLE(_v)) < 0) {
+			LOG(L_ERR, "berkeley_db[str2val]: Error while converting DOUBLE value from string\n");
+			return -4;
+		} else {
+			VAL_TYPE(_v) = DB_DOUBLE;
+			return 0;
+		}
+		break;
+
+	case DB_STRING:
+		VAL_STRING(_v) = _s;
+		VAL_TYPE(_v) = DB_STRING;
+		
+		if( strlen(_s)==4 && !strncasecmp(_s, "NULL", 4) )
+			VAL_NULL(_v) = 1;
+		
+		return 0;
+
+	case DB_STR:
+		VAL_STR(_v).s = (char*)_s;
+		VAL_STR(_v).len = _l;
+		VAL_TYPE(_v) = DB_STR;
+
+		if( strlen(_s)==4 && !strncasecmp(_s, "NULL", 4) )
+			VAL_NULL(_v) = 1;
+
+		return 0;
+
+	case DB_DATETIME:
+		if (db_str2time(_s, &VAL_TIME(_v)) < 0) {
+			LOG(L_ERR, "berkeley_db[str2val]: Error converting datetime\n");
+			return -5;
+		} else {
+			VAL_TYPE(_v) = DB_DATETIME;
+			return 0;
+		}
+		break;
+
+	case DB_BLOB:
+		VAL_BLOB(_v).s = _s;
+		VAL_BLOB(_v).len = _l;
+		VAL_TYPE(_v) = DB_BLOB;
+		LOG(L_DBG, "berkeley_db[str2val]: got blob len %d\n", _l);
+		return 0;
+	}
+
+	return -6;
+}
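The DB_STRING/DB_STR branches above recognize the literal text "NULL" as a SQL NULL marker. A minimal stand-alone version of that test:

```c
#include <assert.h>
#include <string.h>
#include <strings.h>

/* Mirror of the NULL-marker check in sc_str2val: the four-character,
 * case-insensitive word "NULL" marks a NULL column value. */
static int is_null_marker(const char *s)
{
	return strlen(s) == 4 && !strncasecmp(s, "NULL", 4);
}
```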
+
+
+/*
+ * Used when converting result from a query
+ */
+int sc_val2str(db_val_t* _v, char* _s, int* _len)
+{
+	int l;
+
+	if (VAL_NULL(_v)) 
+	{
+		*_len = snprintf(_s, *_len, "NULL");
+		return 0;
+	}
+	
+	switch(VAL_TYPE(_v)) {
+	case DB_INT:
+		if (db_int2str(VAL_INT(_v), _s, _len) < 0) {
+			LOG(L_ERR, "berkeley_db[val2str]: Error while converting int to string\n");
+			return -2;
+		} else {
+			LOG(L_DBG, "berkeley_db[val2str]: Converted int to string\n");
+			return 0;
+		}
+		break;
+
+	case DB_BITMAP:
+		if (db_int2str(VAL_INT(_v), _s, _len) < 0) {
+			LOG(L_ERR, "berkeley_db[val2str]: Error while converting bitmap to string\n");
+			return -3;
+		} else {
+			LOG(L_DBG, "berkeley_db[val2str]: Converted bitmap to string\n");
+			return 0;
+		}
+		break;
+
+	case DB_DOUBLE:
+		if (db_double2str(VAL_DOUBLE(_v), _s, _len) < 0) {
+			LOG(L_ERR, "berkeley_db[val2str]: Error while converting double to string\n");
+			return -3;
+		} else {
+			LOG(L_DBG, "berkeley_db[val2str]: Converted double to string\n");
+			return 0;
+		}
+		break;
+
+	case DB_STRING:
+		l = strlen(VAL_STRING(_v));
+		if (*_len < l ) 
+		{	LOG(L_ERR, "berkeley_db[val2str]: Destination buffer too short for string\n");
+			return -4;
+		} 
+		else 
+		{	LOG(L_DBG, "berkeley_db[val2str]: Converted string to string\n");
+			strncpy(_s, VAL_STRING(_v) , l);
+			_s[l] = 0;
+			*_len = l;
+			return 0;
+		}
+		break;
+
+	case DB_STR:
+		l = VAL_STR(_v).len;
+		if (*_len < l) 
+		{
+			LOG(L_ERR, "berkeley_db[val2str]: Destination buffer too short for str\n");
+			return -5;
+		} 
+		else 
+		{
+			LOG(L_DBG, "berkeley_db[val2str]: Converted str to string\n");
+			strncpy(_s, VAL_STR(_v).s , VAL_STR(_v).len);
+			*_len = VAL_STR(_v).len;
+			return 0;
+		}
+		break;
+
+	case DB_DATETIME:
+		if (db_time2str(VAL_TIME(_v), _s, _len) < 0) {
+			LOG(L_ERR, "berkeley_db[val2str]: Error while converting time_t to string\n");
+			return -6;
+		} else {
+			LOG(L_DBG, "berkeley_db[val2str]: Converted time_t to string\n");
+			return 0;
+		}
+		break;
+
+	case DB_BLOB:
+		l = VAL_BLOB(_v).len;
+		if (*_len < l) 
+		{
+			LOG(L_ERR, "berkeley_db[val2str]: Destination buffer too short for blob\n");
+			return -7;
+		} 
+		else 
+		{
+			LOG(L_DBG, "berkeley_db[val2str]: Converted blob to string\n");
+			memcpy(_s, VAL_BLOB(_v).s, l);
+			*_len = l;
+			return 0;
+		}
+		break;
+
+	default:
+		LOG(L_DBG, "berkeley_db[val2str]: Unknown data type\n");
+		return -8;
+	}
+	return -9;
+}
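Every branch of sc_val2str follows the same contract: render into a caller-supplied buffer, fail when the value does not fit, and report the bytes written back through the length pointer. A sketch of that contract for an int value (names are illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Render v into out; on entry *len is the buffer size, on success it is
 * the number of characters written (excluding the NUL). -1 = too short. */
static int int2buf(int v, char *out, int *len)
{
	char tmp[32];
	int l = snprintf(tmp, sizeof(tmp), "%d", v);
	if (l >= *len)
		return -1;
	memcpy(out, tmp, l + 1); /* include the terminating NUL */
	*len = l;
	return 0;
}
```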

+ 42 - 0
modules/db_berkeley/bdb_val.h

@@ -0,0 +1,42 @@
+/*
+ * $Id$
+ *
+ * db_berkeley module, portions of this code were templated using
+ * the dbtext and postgres modules.
+
+ * Copyright (C) 2007 Cisco Systems
+ *
+ * This file is part of openser, a free SIP server.
+ *
+ * openser is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version
+ *
+ * openser is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License 
+ * along with this program; if not, write to the Free Software 
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * 
+ * History:
+ * --------
+ * 2007-09-19  genesis (wiquan)
+ */
+
+
+#ifndef _BDB_VAL_H_
+#define _BDB_VAL_H_
+
+#include "../../db/db_op.h"
+#include "../../db/db_res.h"
+#include "../../db/db_con.h"
+
+int sc_val2str(db_val_t* _v, char* _s, int* _len);
+int sc_str2val(db_type_t _t, db_val_t* _v, char* _s, int _l);
+
+#endif
+

+ 1289 - 0
modules/db_berkeley/db_berkeley.c

@@ -0,0 +1,1289 @@
+/*
+ * $Id$
+ *
+ * db_berkeley module, portions of this code were templated using
+ * the dbtext and postgres modules.
+
+ * Copyright (C) 2007 Cisco Systems
+ *
+ * This file is part of openser, a free SIP server.
+ *
+ * openser is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version
+ *
+ * openser is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License 
+ * along with this program; if not, write to the Free Software 
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * 
+ * History:
+ * --------
+ * 2007-09-19  genesis (wiquan)
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/stat.h>
+
+
+#include "../../str.h"
+#include "../../mem/mem.h"
+
+#include "../../sr_module.h"
+#include "db_berkeley.h"
+#include "bdb_lib.h"
+#include "bdb_res.h"
+
+#ifndef CFG_DIR
+#define CFG_DIR "/tmp"
+#endif
+
+#define SC_ID		"db_berkeley://"
+#define SC_ID_LEN	(sizeof(SC_ID)-1)
+#define SC_PATH_LEN	256
+
+#define SC_KEY   1
+#define SC_VALUE 0
+
+MODULE_VERSION
+
+int auto_reload = 0;
+int log_enable  = 0;
+int journal_roll_interval = 0;
+
+static int mod_init(void);
+static void destroy(void);
+
+
+/*
+ * Exported functions
+ */
+static cmd_export_t cmds[] = {
+	{"db_use_table",   (cmd_function)sc_use_table,  2, 0, 0},
+	{"db_init",        (cmd_function)sc_init,       1, 0, 0},
+	{"db_close",       (cmd_function)sc_close,      2, 0, 0},
+	{"db_query",       (cmd_function)sc_query,      2, 0, 0},
+	{"db_raw_query",   (cmd_function)sc_raw_query,  2, 0, 0},
+	{"db_free_result", (cmd_function)sc_free_query, 2, 0, 0},
+	{"db_insert",      (cmd_function)sc_insert,     2, 0, 0},
+	{"db_delete",      (cmd_function)sc_delete,     2, 0, 0},
+	{"db_update",      (cmd_function)sc_update,     2, 0, 0},
+	{0, 0, 0, 0, 0}
+};
+
+
+/*
+ * Exported parameters
+ */
+static param_export_t params[] = {
+	{"auto_reload",        INT_PARAM, &auto_reload },
+	{"log_enable",         INT_PARAM, &log_enable  },
+	{"journal_roll_interval", INT_PARAM, &journal_roll_interval  },
+	{0, 0, 0}
+};
+
+
+struct module_exports exports = {	
+	"db_berkeley",
+	DEFAULT_DLFLAGS, /* dlopen flags */
+	cmds,     /* Exported functions */
+	params,   /* Exported parameters */
+	0,        /* exported statistics */
+	0,        /* exported MI functions */
+	0,        /* exported pseudo-variables */
+	0,        /* extra processes */
+	mod_init, /* module initialization function */
+	0,        /* response function*/
+	destroy,  /* destroy function */
+	0         /* per-child init function */
+};
+
+
+static int mod_init(void)
+{
+	db_parms_t p;
+	
+	p.auto_reload = auto_reload;
+	p.log_enable = log_enable;
+	p.cache_size  = (4 * 1024 * 1024); /* 4MB */
+	p.journal_roll_interval = journal_roll_interval;
+	
+	if(sclib_init(&p))
+		return -1;
+
+	return 0;
+}
+
+static void destroy(void)
+{
+	sclib_destroy();
+}
+
+int sc_use_table(db_con_t* _h, const char* _t)
+{
+	if ((!_h) || (!_t))
+		return -1;
+	
+	CON_TABLE(_h) = _t;
+	return 0;
+}
+
+/*
+ * Initialize database connection
+ */
+db_con_t* sc_init(const char* _sqlurl)
+{
+	db_con_t* _res;
+	str _s;
+	char sc_path[SC_PATH_LEN];
+	
+	if (!_sqlurl) 
+		return NULL;
+	
+	_s.s = (char*)_sqlurl;
+	_s.len = strlen(_sqlurl);
+	if(_s.len <= SC_ID_LEN || strncmp(_s.s, SC_ID, SC_ID_LEN)!=0)
+	{
+		LOG(L_ERR, "sc_init: invalid database URL - should be:"
+			" <%s[/]path/to/directory>\n", SC_ID);
+		return NULL;
+	}
+	_s.s   += SC_ID_LEN;
+	_s.len -= SC_ID_LEN;
+	
+	if(_s.s[0]!='/')
+	{
+		if(sizeof(CFG_DIR)+_s.len+2 > SC_PATH_LEN)
+		{
+			LOG(L_ERR, "sc_init: path to database is too long\n");
+			return NULL;
+		}
+		strcpy(sc_path, CFG_DIR);
+		sc_path[sizeof(CFG_DIR)-1] = '/';
+		strncpy(&sc_path[sizeof(CFG_DIR)], _s.s, _s.len);
+		_s.len += sizeof(CFG_DIR);
+		_s.s = sc_path;
+	}
+	
+	_res = pkg_malloc(sizeof(db_con_t)+sizeof(sc_con_t));
+	if (!_res)
+	{
+		LOG(L_ERR, "sc_init: No memory left\n");
+		return NULL;
+	}
+	memset(_res, 0, sizeof(db_con_t) + sizeof(sc_con_t));
+	_res->tail = (unsigned long)((char*)_res+sizeof(db_con_t));
+	
+	SC_CON_CONNECTION(_res) = sclib_get_db(&_s);
+	if (!SC_CON_CONNECTION(_res))
+	{
+		LOG(L_ERR, "sc_init: cannot get the link to database\n");
+		pkg_free(_res);
+		return NULL;
+	}
+
+	return _res;
+}
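sc_init accepts URLs of the form db_berkeley://&lt;path&gt;, anchoring a relative path under CFG_DIR. The scheme check on its own (a sketch; the path rewriting is omitted):

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

#define SC_ID "db_berkeley://"

/* Strip the db_berkeley:// scheme and return the path part, or NULL on
 * a malformed/empty URL, as in the validation in sc_init above. */
static const char *parse_bdb_url(const char *url)
{
	size_t idlen = strlen(SC_ID);
	if (!url || strlen(url) <= idlen || strncmp(url, SC_ID, idlen) != 0)
		return NULL;
	return url + idlen;
}
```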
+
+
+/*
+ * Close a database connection
+ */
+void sc_close(db_con_t* _h)
+{
+	if(SC_CON_RESULT(_h))
+		sc_free_result(SC_CON_RESULT(_h));
+	pkg_free(_h);
+}
+
+/* 
+ * n can be the dbenv path or a table name
+*/
+void sc_reload(char* _n)
+{
+	
+#ifdef SC_EXTRA_DEBUG
+	DBG("-------------------------------------------------\n");
+	DBG("------- RELOAD in %s\n", _n);
+	DBG("-------------------------------------------------\n");
+#endif
+
+	sclib_close(_n);
+	sclib_reopen(_n);
+}
+
+/*
+ * Attempts to reload a Berkeley database; reloads when the inode changes
+ */
+void sc_check_reload(db_con_t* _con)
+{
+	
+	str s;
+	char* p;
+	int rc, len;
+	struct stat st;
+	database_p db;
+	char n[MAX_ROW_SIZE];
+	char t[MAX_TABLENAME_SIZE];
+	table_p tp = NULL;
+	tbl_cache_p tbc = NULL;
+	
+	p=n;
+	rc = len = 0;
+	
+	/*get dbenv name*/
+	db = SC_CON_CONNECTION(_con);
+	if(!db->dbenv)	return;
+	s.s = db->name.s;
+	s.len = db->name.len;
+	len+=s.len;
+	
+	if(len > MAX_ROW_SIZE)
+	{	LOG(L_ERR, "sc_check_reload: dbenv name too long \n");
+		return;
+	}
+	
+	strncpy(p, s.s, s.len);
+	p+=s.len;
+	
+	len++;
+	if(len > MAX_ROW_SIZE)
+	{	LOG(L_ERR, "sc_check_reload: dbenv name too long \n");
+		return;
+	}
+	
+	/*append slash */
+	*p = '/';
+	p++;
+	
+	/*get table name*/
+	s.s = (char*)CON_TABLE(_con);
+	s.len = strlen(CON_TABLE(_con));
+	len+=s.len;
+	
+	if((len>MAX_ROW_SIZE) || (s.len > MAX_TABLENAME_SIZE) )
+	{	LOG(L_ERR, "sc_check_reload: table name too long \n");
+		return;
+	}
+
+	strncpy(t, s.s, s.len);
+	t[s.len] = 0;
+	
+	strncpy(p, s.s, s.len);
+	p+=s.len;
+	*p=0;
+	
+	if( (tbc = sclib_get_table(db, &s)) == NULL)
+		return;
+	
+	if( (tp = tbc->dtp) == NULL)
+		return;
+	
+	DBG("sc_check_reload: stat file [%.*s]\n", len, n);
+	rc = stat(n, &st);
+	if(!rc)
+	{	if((tp->ino!=0) && (st.st_ino != tp->ino))
+			sc_reload(t); /*file changed on disk*/
+		
+		tp->ino = st.st_ino;
+	}
+
+}
+
+
+/*
+ * Free all memory allocated by get_result
+ */
+int sc_free_query(db_con_t* _h, db_res_t* _r)
+{
+	if(_r)
+		sc_free_result(_r);
+	if(_h)
+		SC_CON_RESULT(_h) = NULL;
+	return 0;
+}
+
+
+/*
+ * Query table for specified rows
+ * _con: structure representing database connection
+ * _k: key names
+ * _op: operators
+ * _v: values of the keys that must match
+ * _c: column names to return
+ * _n: number of key=values pairs to compare
+ * _nc: number of columns to return
+ * _o: order by the specified column
+ */
+int sc_query(db_con_t* _con, db_key_t* _k, db_op_t* _op, db_val_t* _v, 
+			db_key_t* _c, int _n, int _nc, db_key_t _o, db_res_t** _r)
+{
+	tbl_cache_p _tbc = NULL;
+	table_p _tp = NULL;
+	char kbuf[MAX_ROW_SIZE];
+	char dbuf[MAX_ROW_SIZE];
+	u_int32_t i, len, ret; 
+	int klen=MAX_ROW_SIZE;
+	int *lkey=NULL, *lres=NULL;
+	str s;
+	DBT key, data;
+	DB *db;
+	DBC *dbcp;
+
+	if ((!_con) || (!_r) || !CON_TABLE(_con))
+	{
+#ifdef SC_EXTRA_DEBUG
+		LOG(L_ERR, "sc_query: Invalid parameter value\n");
+#endif
+		return -1;
+	}
+	*_r = NULL;
+	
+	/*check if underlying DB file has changed inode */
+	if(auto_reload)
+		sc_check_reload(_con);
+
+	s.s = (char*)CON_TABLE(_con);
+	s.len = strlen(CON_TABLE(_con));
+
+	_tbc = sclib_get_table(SC_CON_CONNECTION(_con), &s);
+	if(!_tbc)
+	{	DBG("sc_query: table does not exist!\n");
+		return -1;
+	}
+
+	_tp = _tbc->dtp;
+	if(!_tp)
+	{	DBG("sc_query: table not loaded!\n");
+		return -1;
+	}
+
+#ifdef SC_EXTRA_DEBUG
+	DBG("-------------------------------------------------\n");
+	DBG("------- QUERY in %.*s\n", _tp->name.len, _tp->name.s);
+	DBG("-------------------------------------------------\n");
+
+	if (_o)  DBG("sc_query: DONT-CARE : _o: order by the specified column \n");
+	if (_op) DBG("sc_query: DONT-CARE : _op: operators for refining query \n");
+#endif
+	
+	db = _tp->db;
+	if(!db) return -1;
+	
+	memset(&key, 0, sizeof(DBT));
+	memset(kbuf, 0, MAX_ROW_SIZE);
+	memset(&data, 0, sizeof(DBT));
+	memset(dbuf, 0, MAX_ROW_SIZE);
+	
+	data.data = dbuf;
+	data.ulen = MAX_ROW_SIZE;
+	data.flags = DB_DBT_USERMEM;
+
+	/* if _c is NULL and _nc is zero, you will get all table 
+	   columns in the result
+	*/
+	if (_c)
+	{	lres = sc_get_colmap(_tbc->dtp, _c, _nc);
+		if(!lres)
+		{	ret = -1;
+			goto error;
+		}
+	}
+	
+	if(_k)
+	{	lkey = sc_get_colmap(_tbc->dtp, _k, _n);
+		if(!lkey) 
+		{	ret = -1;
+			goto error;
+		}
+	}
+	else
+	{
+		DB_HASH_STAT st;
+		memset(&st, 0, sizeof(DB_HASH_STAT));
+		i =0 ;
+
+#ifdef SC_EXTRA_DEBUG
+		DBG("------------------------------------------------------\n");
+		DBG("------- SELECT * FROM %.*s\n", _tp->name.len, _tp->name.s);
+		DBG("------------------------------------------------------\n");
+#endif
+
+		/* Acquire a cursor for the database. */
+		if ((ret = db->cursor(db, NULL, &dbcp, 0)) != 0) 
+		{	LOG(L_ERR, "sc_query: Error creating cursor\n");
+			goto error;
+		}
+		
+		/*count the number of records*/
+		while ((ret = dbcp->c_get(dbcp, &key, &data, DB_NEXT)) == 0)
+		{	if(!strncasecmp((char*)key.data,"METADATA",8)) 
+				continue;
+			i++;
+		}
+		
+		dbcp->c_close(dbcp);
+		ret=0;
+		
+#ifdef SC_EXTRA_DEBUG
+		DBG("--- %i = SELECT COUNT(*) FROM %.*s\n", i, _tp->name.len, _tp->name.s);
+#endif
+
+		*_r = sc_result_new();
+		if (!*_r) 
+		{	LOG(L_ERR, "sc_query: no memory left for result \n");
+			ret = -2;
+			goto error;
+		}
+		
+		if(i == 0)
+		{	
+			/*return empty table*/
+			RES_ROW_N(*_r) = 0;
+			SC_CON_RESULT(_con) = *_r;
+			return 0;
+		}
+		
+		/*allocate N rows in the result*/
+		RES_ROW_N(*_r) = i;
+		len  = sizeof(db_row_t) * i;
+		RES_ROWS(*_r) = (db_row_t*)pkg_malloc( len );
+		if (!RES_ROWS(*_r))
+		{	LOG(L_ERR, "sc_query: no memory left for result rows\n");
+			ret = -2;
+			goto error;
+		}
+		memset(RES_ROWS(*_r), 0, len);
+		
+		/*fill in the column part of db_res_t (metadata) */
+		if ((ret = sc_get_columns(_tbc->dtp, *_r, lres, _nc)) < 0) 
+		{	LOG(L_ERR, "sc_query: Error while getting column names\n");
+			goto error;
+		}
+		
+		/* Acquire a cursor for the database. */
+		if ((ret = db->cursor(db, NULL, &dbcp, 0)) != 0) 
+		{	LOG(L_ERR, "sc_query: Error creating cursor\n");
+			goto error;
+		}
+
+		/*convert each record into a row in the result*/
+		i =0 ;
+		while ((ret = dbcp->c_get(dbcp, &key, &data, DB_NEXT)) == 0)
+		{
+			if(!strncasecmp((char*)key.data,"METADATA",8)) 
+				continue;
+			
+#ifdef SC_EXTRA_DEBUG
+		DBG("     KEY:  [%.*s]\n     DATA: [%.*s]\n"
+			, (int)   key.size
+			, (char *)key.data
+			, (int)   data.size
+			, (char *)data.data);
+#endif
+
+			/*fill in the row part of db_res_t */
+			if ((ret=sc_append_row( *_r, dbuf, lres, i)) < 0) 
+			{	LOG(L_ERR, "sc_query: Error while converting row\n");
+				goto error;
+			}
+			i++;
+		}
+		
+		dbcp->c_close(dbcp);
+		SC_CON_RESULT(_con) = *_r;
+		return 0; 
+	}
+
+	if ( (ret = sclib_valtochar(_tp, lkey, kbuf, &klen, _v, _n, SC_KEY)) != 0 ) 
+	{	LOG(L_ERR, "sc_query: error in query key \n");
+		goto error;
+	}
+
+	key.data = kbuf;
+	key.ulen = MAX_ROW_SIZE;
+	key.flags = DB_DBT_USERMEM;
+	key.size = klen;
+
+	data.data = dbuf;
+	data.ulen = MAX_ROW_SIZE;
+	data.flags = DB_DBT_USERMEM;
+
+	/*create an empty db_res_t which gets returned even if no result*/
+	*_r = sc_result_new();
+	if (!*_r) 
+	{	LOG(L_ERR, "sc_query: no memory left for result \n");
+		ret = -2;
+		goto error;
+	}
+	RES_ROW_N(*_r) = 0;
+	SC_CON_RESULT(_con) = *_r;
+
+#ifdef SC_EXTRA_DEBUG
+		DBG("-------------------------------------------------\n");
+		DBG("SELECT  KEY: [%.*s]\n"
+			, (int)   key.size
+			, (char *)key.data );
+		DBG("-------------------------------------------------\n");
+#endif
+
+	/*query Berkeley DB*/
+	if ((ret = db->get(db, NULL, &key, &data, 0)) == 0) 
+	{
+#ifdef SC_EXTRA_DEBUG
+		DBG("-------------------------------------------------\n");
+		DBG("-- RESULT\n     KEY:  [%.*s]\n     DATA: [%.*s]\n"
+			, (int)   key.size
+			, (char *)key.data
+			, (int)   data.size
+			, (char *)data.data);
+		DBG("-------------------------------------------------\n");
+#endif
+
+		/*fill in the col part of db_res_t */
+		if ((ret = sc_get_columns(_tbc->dtp, *_r, lres, _nc)) < 0) 
+		{	LOG(L_ERR, "sc_query: Error while getting column names\n");
+			goto error;
+		}
+		/*fill in the row part of db_res_t */
+		if ((ret=sc_convert_row( *_r, dbuf, lres)) < 0) 
+		{	LOG(L_ERR, "sc_query: Error while converting row\n");
+			goto error;
+		}
+		
+		if(lkey)
+			pkg_free(lkey);
+		if(lres)
+			pkg_free(lres);
+	}
+	else
+	{	
+		/*Berkeley DB error handler*/
+		switch(ret)
+		{
+		
+		case DB_NOTFOUND:
+		
+#ifdef SC_EXTRA_DEBUG
+			DBG("------------------------------\n");
+			DBG("-- NO RESULT for QUERY \n");
+			DBG("------------------------------\n");
+#endif
+		
+			ret=0;
+			break;
+		/*The following are all critical/fatal */
+		case DB_LOCK_DEADLOCK:
+		/* The operation was selected to resolve a deadlock. */
+		case DB_SECONDARY_BAD:
+		/* A secondary index references a nonexistent primary key. */
+		case DB_RUNRECOVERY:
+		default:
+			LOG(L_CRIT,"sc_query: DB->get error: %s.\n", db_strerror(ret));
+			sclib_recover(_tp,ret);
+			goto error;
+		}
+	}
+
+	return ret;
+	
+error:
+	if(lkey)
+		pkg_free(lkey);
+	if(lres)
+		pkg_free(lres);
+	if(*_r) 
+		sc_free_result(*_r);
+	*_r = NULL;
+	
+	return ret;
+}
+
+
+
+/*
+ * Raw SQL query
+ */
+int sc_raw_query(db_con_t* _h, char* _s, db_res_t** _r)
+{
+#ifdef SC_EXTRA_DEBUG
+	DBG("-------------------------------------------------\n");
+	DBG("------- Todo: Implement DB RAW QUERY \n");
+	DBG("-------------------------------------------------\n");
+#endif
+	return -1;
+}
+
+/*
+ * Insert a row into table
+ */
+int sc_insert(db_con_t* _h, db_key_t* _k, db_val_t* _v, int _n)
+{
+	tbl_cache_p _tbc = NULL;
+	table_p _tp = NULL;
+	char kbuf[MAX_ROW_SIZE];
+	char dbuf[MAX_ROW_SIZE];
+	int i, j, ret, klen, dlen;
+	int *lkey=NULL;
+	DBT key, data;
+	DB *db;
+	str s;
+
+	i = j = ret = 0;
+	klen=MAX_ROW_SIZE;
+	dlen=MAX_ROW_SIZE;
+
+	if ((!_h) || (!_v) || !CON_TABLE(_h))
+	{	return -1;
+	}
+
+	if (!_k)
+	{
+#ifdef SC_EXTRA_DEBUG
+	DBG("-------------------------------------------------\n");
+	DBG("------- Todo: Implement DB INSERT w.o KEYs !! \n");
+	DBG("-------------------------------------------------\n");
+#endif
+		return -2;
+	}
+
+	s.s = (char*)CON_TABLE(_h);
+	s.len = strlen(CON_TABLE(_h));
+
+	_tbc = sclib_get_table(SC_CON_CONNECTION(_h), &s);
+	if(!_tbc)
+	{	DBG("sc_insert: table does not exist!\n");
+		return -3;
+	}
+
+	_tp = _tbc->dtp;
+	if(!_tp)
+	{	DBG("sc_insert: table not loaded!\n");
+		return -4;
+	}
+
+#ifdef SC_EXTRA_DEBUG
+		DBG("---------------------------------------------------\n");
+		DBG("------- INSERT in %.*s\n", _tp->name.len, _tp->name.s );
+		DBG("---------------------------------------------------\n");
+#endif
+	
+	db = _tp->db;
+	memset(&key, 0, sizeof(DBT));
+	memset(kbuf, 0, klen);
+	
+	if(_tp->ncols<_n) 
+	{	DBG("sc_insert: more values than columns!!\n");
+		return -5;
+	}
+
+	if(_tp->ncols>_n) 
+	{	DBG("sc_insert: not enough values(%i) to fill the columns(%i) !!\n", _n, _tp->ncols);
+		return -6;
+	}
+	
+
+	lkey = sc_get_colmap(_tp, _k, _n);
+	if(!lkey)  return -7;
+
+	/* verify col types provided */
+	for(i=0; i<_n; i++)
+	{	j = (lkey)?lkey[i]:i;
+		if(sc_is_neq_type(_tp->colp[j]->type, _v[i].type))
+		{
+			DBG("sc_insert: incompatible types v[%d] - c[%d]!\n", i, j);
+			ret = -8;
+			goto error;
+		}
+	}
+	
+	/* make the key */
+	if ( (ret = sclib_valtochar(_tp, lkey, kbuf, &klen, _v, _n, SC_KEY)) != 0 ) 
+	{	LOG(L_ERR, "sc_insert: error in sclib_valtochar  \n");
+		ret = -9;
+		goto error;
+	}
+	
+	key.data = kbuf;
+	key.ulen = MAX_ROW_SIZE;
+	key.flags = DB_DBT_USERMEM;
+	key.size = klen;
+
+	/* make the value (row) */
+	memset(&data, 0, sizeof(DBT));
+	memset(dbuf, 0, MAX_ROW_SIZE);
+
+	if ( (ret = sclib_valtochar(_tp, lkey, dbuf, &dlen, _v, _n, SC_VALUE)) != 0 ) 
+	{	LOG(L_ERR, "sc_insert: error in sclib_valtochar \n");
+		ret = -9;
+		goto error;
+	}
+
+	data.data = dbuf;
+	data.ulen = MAX_ROW_SIZE;
+	data.flags = DB_DBT_USERMEM;
+	data.size = dlen;
+
+	if ((ret = db->put(db, NULL, &key, &data, 0)) == 0) 
+	{
+		sclib_log(JLOG_INSERT, _tp, dbuf, dlen);
+
+#ifdef SC_EXTRA_DEBUG
+	DBG("-------------------------------------------------\n");
+	DBG("-- INSERT\n     KEY:  [%.*s]\n     DATA: [%.*s]\n"
+		, (int)   key.size
+		, (char *)key.data
+		, (int)   data.size
+		, (char *)data.data);
+	DBG("-------------------------------------------------\n");
+#endif
+	}
+	else
+	{	/*Berkeley DB error handler*/
+		switch(ret)
+		{
+		/*The following are all critical/fatal */
+		case DB_LOCK_DEADLOCK:	
+		/* The operation was selected to resolve a deadlock. */ 
+		
+		case DB_RUNRECOVERY:
+		default:
+			LOG(L_CRIT, "sc_insert: DB->put error: %s.\n", db_strerror(ret));
+			sclib_recover(_tp, ret);
+			goto error;
+		}
+	}
+
+	return 0;
+	
+error:
+	if(lkey)
+		pkg_free(lkey);
+	
+	return ret;
+
+}
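The key and data buffers passed to DB->put() above are produced by sclib_valtochar(), which joins the selected column values with the '|' delimiter into a bounded buffer. A minimal standalone sketch of that encoding (join_cols is a hypothetical helper, not the module's actual implementation):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* hypothetical sketch of the row encoding built by sclib_valtochar():
 * join column values with the '|' delimiter into a bounded buffer.
 * Returns the encoded length, or -1 if the row would overflow. */
static int join_cols(char *buf, size_t buflen, const char **vals, int n)
{
	size_t used = 0;
	int i;

	for (i = 0; i < n; i++) {
		int w = snprintf(buf + used, buflen - used, "%s|", vals[i]);
		if (w < 0 || (size_t)w >= buflen - used)
			return -1;	/* analogous to exceeding MAX_ROW_SIZE */
		used += (size_t)w;
	}
	return (int)used;
}
```

The resulting buffer is then handed to Berkeley DB via a DBT with the DB_DBT_USERMEM flag, exactly as sc_insert does with kbuf and dbuf.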
+
+/*
+ * Delete a row from table
+ *
+ * To delete ALL rows:
+ *   do not specify any keys or values, and pass _n <= 0
+ *
+ */
+int sc_delete(db_con_t* _h, db_key_t* _k, db_op_t* _op, db_val_t* _v, int _n)
+{
+	tbl_cache_p _tbc = NULL;
+	table_p _tp = NULL;
+	char kbuf[MAX_ROW_SIZE];
+	int i, j, ret, klen;
+	int *lkey=NULL;
+	DBT key;
+	DB *db;
+	DBC *dbcp;
+	str s;
+
+	i = j = ret = 0;
+	klen=MAX_ROW_SIZE;
+
+	if (_op)
+		return ( _sc_delete_cursor(_h, _k, _op, _v, _n) );
+
+	if ((!_h) || !CON_TABLE(_h))
+		return -1;
+
+	s.s = (char*)CON_TABLE(_h);
+	s.len = strlen(CON_TABLE(_h));
+
+	_tbc = sclib_get_table(SC_CON_CONNECTION(_h), &s);
+	if(!_tbc)
+	{	DBG("sc_delete: table does not exist!\n");
+		return -3;
+	}
+
+	_tp = _tbc->dtp;
+	if(!_tp)
+	{	DBG("sc_delete: table not loaded!\n");
+		return -4;
+	}
+
+#ifdef SC_EXTRA_DEBUG
+		DBG("-------------------------------------------------\n");
+		DBG("------- DELETE in %.*s\n", _tp->name.len, _tp->name.s );
+		DBG("-------------------------------------------------\n");
+#endif
+
+	db = _tp->db;
+	memset(&key, 0, sizeof(DBT));
+	memset(kbuf, 0, klen);
+
+	if(!_k || !_v || _n<=0)
+	{
+		/* Acquire a cursor for the database. */
+		if ((ret = db->cursor(db, NULL, &dbcp, DB_WRITECURSOR) ) != 0) 
+		{	LOG(L_ERR, "sc_delete: Error creating cursor\n");
+			goto error;
+		}
+		
+		while ((ret = dbcp->c_get(dbcp, &key, NULL, DB_NEXT)) == 0)
+		{
+			if(!strncasecmp((char*)key.data,"METADATA",8)) 
+				continue;
+#ifdef SC_EXTRA_DEBUG
+			DBG("     KEY: [%.*s]\n "
+				, (int)   key.size
+				, (char *)key.data);
+#endif
+			ret = dbcp->c_del(dbcp, 0);
+		}
+		
+		dbcp->c_close(dbcp);
+		return 0;
+	}
+
+	lkey = sc_get_colmap(_tp, _k, _n);
+	if(!lkey)  return -5;
+
+	/* make the key */
+	if ( (ret = sclib_valtochar(_tp, lkey, kbuf, &klen, _v, _n, SC_KEY)) != 0 ) 
+	{	LOG(L_ERR, "sc_delete: error in sclib_makekey  \n");
+		ret = -6;
+		goto error;
+	}
+
+	key.data = kbuf;
+	key.ulen = MAX_ROW_SIZE;
+	key.flags = DB_DBT_USERMEM;
+	key.size = klen;
+
+	if ((ret = db->del(db, NULL, &key, 0)) == 0)
+	{
+		sclib_log(JLOG_DELETE, _tp, kbuf, klen);
+
+#ifdef SC_EXTRA_DEBUG
+		DBG("-------------------------------------------------\n");
+		DBG("-- DELETED ROW \n KEY: %s \n", (char *)key.data);
+		DBG("-------------------------------------------------\n");
+#endif
+	}
+	else
+	{	/*Berkeley DB error handler*/
+		switch(ret){
+			
+		case DB_NOTFOUND:
+			ret = 0;
+			break;
+			
+		/*The following are all critical/fatal */
+		case DB_LOCK_DEADLOCK:	
+		/* The operation was selected to resolve a deadlock. */ 
+		case DB_SECONDARY_BAD:
+		/* A secondary index references a nonexistent primary key. */
+		case DB_RUNRECOVERY:
+		default:
+			LOG(L_CRIT,"sc_delete: DB->del error: %s.\n"
+				, db_strerror(ret));
+			sclib_recover(_tp, ret);
+			goto error;
+		}
+	}
+
+	ret = 0;
+	
+error:
+	if(lkey)
+		pkg_free(lkey);
+	
+	return ret;
+
+}
+
+/*
+_sc_delete_cursor -- called from sc_delete when the query involves operators 
+  other than equal '='. Adds support for queries like this:
+	DELETE from SomeTable WHERE _k[0] < _v[0]
+  In this case, the keys _k are not the actual schema keys, so we need to 
+  iterate via cursor to perform this operation.
+*/
+int _sc_delete_cursor(db_con_t* _h, db_key_t* _k, db_op_t* _op, db_val_t* _v, int _n)
+{
+	tbl_cache_p _tbc = NULL;
+	table_p _tp = NULL;
+	db_res_t* _r   = NULL;
+	char kbuf[MAX_ROW_SIZE];
+	char dbuf[MAX_ROW_SIZE];
+	int i, ret, klen=MAX_ROW_SIZE;
+	DBT key, data;
+	DB *db;
+	DBC *dbcp = NULL;
+	int *lkey=NULL;
+	str s;
+	
+	i = ret = 0;
+	
+	if ((!_h) || !CON_TABLE(_h))
+		return -1;
+
+	s.s = (char*)CON_TABLE(_h);
+	s.len = strlen(CON_TABLE(_h));
+
+	_tbc = sclib_get_table(SC_CON_CONNECTION(_h), &s);
+	if(!_tbc)
+	{	DBG("_sc_delete_cursor: table does not exist!\n");
+		return -3;
+	}
+
+	_tp = _tbc->dtp;
+	if(!_tp)
+	{	DBG("_sc_delete_cursor: table not loaded!\n");
+		return -4;
+	}
+	
+#ifdef SC_EXTRA_DEBUG
+	DBG("-------------------------------------------------\n");
+	DBG("------- DELETE by cursor in %.*s\n", _tp->name.len, _tp->name.s );
+	DBG("-------------------------------------------------\n");
+#endif
+
+	if(_k)
+	{	lkey = sc_get_colmap(_tp, _k, _n);
+		if(!lkey) 
+		{	ret = -1;
+			goto error;
+		}
+	}
+	
+	/* create an empty db_res_t which gets returned even if no result */
+	_r = sc_result_new();
+	if (!_r) 
+	{	LOG(L_ERR, "_sc_delete_cursor: no memory for result \n");
+		if(lkey)
+			pkg_free(lkey);
+		return -1;
+	}
+	
+	RES_ROW_N(_r) = 0;
+	
+	/* fill in the col part of db_res_t */
+	if ((ret = sc_get_columns(_tp, _r, 0, 0)) != 0) 
+	{	LOG(L_ERR, "_sc_delete_cursor: Error while getting column names\n");
+		goto error;
+	}
+	
+	db = _tp->db;
+	memset(&key, 0, sizeof(DBT));
+	memset(kbuf, 0, klen);
+	memset(&data, 0, sizeof(DBT));
+	memset(dbuf, 0, MAX_ROW_SIZE);
+	
+	data.data = dbuf;
+	data.ulen = MAX_ROW_SIZE;
+	data.flags = DB_DBT_USERMEM;
+	
+	/* Acquire a cursor for the database. */
+	if ((ret = db->cursor(db, NULL, &dbcp, DB_WRITECURSOR)) != 0) 
+	{	LOG(L_ERR, "_sc_delete_cursor: Error creating cursor\n");
+		dbcp = NULL;
+		goto error;
+	}
+	
+	while ((ret = dbcp->c_get(dbcp, &key, &data, DB_NEXT)) == 0)
+	{
+		if(!strncasecmp((char*)key.data,"METADATA",8))
+			continue;
+		
+		/*fill in the row part of db_res_t */
+		if ((ret=sc_convert_row( _r, dbuf, 0)) < 0) 
+		{	LOG(L_ERR, "_sc_delete_cursor: Error while converting row\n");
+			goto error;
+		}
+		
+		if(sc_row_match(_k, _op, _v, _n, _r, lkey ))
+		{
+
+#ifdef SC_EXTRA_DEBUG
+			DBG("[_sc_delete_cursor] DELETE ROW by KEY:  [%.*s]\n"
+				, (int) key.size, (char *)key.data);
+#endif
+
+			if((ret = dbcp->c_del(dbcp, 0)) != 0)
+			{	
+				/* Berkeley DB error handler */
+				LOG(L_CRIT,"_sc_delete_cursor: DBC->c_del error: %s.\n"
+					, db_strerror(ret));
+				sclib_recover(_tp,ret);
+			}
+			
+		}
+		
+		sc_free_rows( _r);
+	}
+	ret = 0;
+	
+error:
+	if(dbcp)
+		dbcp->c_close(dbcp);
+	if(_r)
+		sc_free_result(_r);
+	if(lkey)
+		pkg_free(lkey);
+	
+	return ret;
+}
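For each row converted above, sc_row_match() (defined elsewhere in the module) applies the caller's _op operators to decide whether the cursor entry should be deleted. A hypothetical sketch of that kind of operator dispatch, here for integer values only:

```c
#include <assert.h>
#include <string.h>

/* hypothetical sketch of per-row operator matching in the spirit of
 * sc_row_match(): apply a SQL-style comparison operator to two ints */
static int op_match(const char *op, int a, int b)
{
	if (strcmp(op, "=") == 0)
		return a == b;
	if (strcmp(op, "<") == 0)
		return a < b;
	if (strcmp(op, ">") == 0)
		return a > b;
	if (strcmp(op, "<=") == 0)
		return a <= b;
	if (strcmp(op, ">=") == 0)
		return a >= b;
	return 0;	/* unknown operator: no match */
}
```

This is why the full-table scan is needed: for a query like "DELETE ... WHERE col < value" the key cannot be computed up front, so every row must be fetched and tested.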
+
+/*
+ * Updates a row in table
+ * Limitation: only knows how to update a single row
+ *
+ * _con: structure representing database connection
+ * _k: key names
+ * _op: operators
+ * _v: values of the keys that must match
+ * _uk: update keys; cols that need to be updated 
+ * _uv: update values; col values that need to be committed
+ * _un: number of update keys/values
+ */
+int sc_update(db_con_t* _con, db_key_t* _k, db_op_t* _op, db_val_t* _v,
+	      db_key_t* _uk, db_val_t* _uv, int _n, int _un)
+{
+	str s;
+	char *c, *t;
+	int ret, i, qcol, len, sum;
+	int *lkey=NULL;
+	tbl_cache_p _tbc = NULL;
+	table_p _tp = NULL;
+	char kbuf[MAX_ROW_SIZE];
+	char qbuf[MAX_ROW_SIZE];
+	char ubuf[MAX_ROW_SIZE];
+	DBT key, qdata, udata;
+	DB *db;
+	
+	sum = ret = i = qcol = len = 0;
+	
+	if (!_con || !CON_TABLE(_con) || !_uk || !_uv || _un <= 0)
+		return -1;
+	
+	s.s = (char*)CON_TABLE(_con);
+	s.len = strlen(CON_TABLE(_con));
+
+	_tbc = sclib_get_table(SC_CON_CONNECTION(_con), &s);
+	if(!_tbc)
+	{	LOG(L_ERR, "ERROR: sc_update:: table does not exist\n");
+		return -1;
+	}
+
+	_tp = _tbc->dtp;
+	if(!_tp)
+	{	LOG(L_ERR, "ERROR: sc_update:: table not loaded\n");
+		return -1;
+	}
+	
+	db = _tp->db;
+	if(!db)
+	{	LOG(L_ERR, "ERROR: sc_update:: DB null ptr\n");
+		return -1;
+	}
+	
+#ifdef SC_EXTRA_DEBUG
+	DBG("-------------------------------------------------\n");
+	DBG("-- UPDATE in %.*s\n", _tp->name.len, _tp->name.s);
+	DBG("-------------------------------------------------\n");
+	if (_op) DBG("sc_update: DONT-CARE : _op: operators for refining query \n");
+#endif
+	
+	memset(&key, 0, sizeof(DBT));
+	memset(kbuf, 0, MAX_ROW_SIZE);
+	memset(&qdata, 0, sizeof(DBT));
+	memset(qbuf, 0, MAX_ROW_SIZE);
+	
+	qdata.data = qbuf;
+	qdata.ulen = MAX_ROW_SIZE;
+	qdata.flags = DB_DBT_USERMEM;
+	
+	if(_k)
+	{	lkey = sc_get_colmap(_tbc->dtp, _k, _n);
+		if(!lkey) return -4;
+	}
+	else
+	{
+		LOG(L_ERR, "ERROR: sc_update:: Null keys in update _k=0 \n");
+		return -1;
+	}
+	
+	len = MAX_ROW_SIZE;
+	
+	if ( (ret = sclib_valtochar(_tp, lkey, kbuf, &len, _v, _n, SC_KEY)) != 0 ) 
+	{	LOG(L_ERR, "sc_update: error in query key \n");
+		goto cleanup;
+	}
+	
+	if(lkey)
+	{	pkg_free(lkey);
+		lkey = NULL;
+	}
+	
+	key.data = kbuf;
+	key.ulen = MAX_ROW_SIZE;
+	key.flags = DB_DBT_USERMEM;
+	key.size = len;
+	
+	/*stage 1: QUERY Berkely DB*/
+	if ((ret = db->get(db, NULL, &key, &qdata, 0)) == 0) 
+	{
+
+#ifdef SC_EXTRA_DEBUG
+		DBG("---1 uRESULT\n     KEY:  [%.*s]\n     DATA: [%.*s]\n"
+			, (int)   key.size
+			, (char *)key.data
+			, (int)   qdata.size
+			, (char *)qdata.data);
+#endif
+
+	}
+	else
+	{	goto db_error;
+	}
+	
+	/* stage 2: UPDATE row with new values */
+	
+	/* map the provided keys to those in our schema */ 
+	lkey = sc_get_colmap(_tbc->dtp, _uk, _un);
+	if(!lkey) return -4;
+	
+	/* build a new row for update data (udata) */
+	memset(&udata, 0, sizeof(DBT));
+	memset(ubuf, 0, MAX_ROW_SIZE);
+	
+	/* loop over each column of the qbuf and copy it to our new ubuf unless
+	   it is a field that needs to be updated
+	*/
+	c = strtok(qbuf, DELIM);
+	t = ubuf;
+	while( c!=NULL)
+	{	char* delim = DELIM;
+		int k;
+		
+		len = strlen(c);
+		sum+=len;
+		
+		if(sum > MAX_ROW_SIZE)
+		{	LOG(L_ERR, "sc_update: value too long for string \n");
+			ret = -3;
+			goto cleanup;
+		}
+		
+		for(i=0;i<_un;i++)
+		{
+			k = lkey[i];
+			if (qcol == k)
+			{	/* update this col */
+				int j = MAX_ROW_SIZE - sum;
+				if( sc_val2str( &_uv[i], t, &j) )
+				{	LOG(L_ERR, "sc_update: value too long for string \n");
+					ret = -3;
+					goto cleanup;
+				}
+
+				goto next;
+			}
+			
+		}
+		
+		/* copy original column to the new column */
+		strncpy(t, c, len);
+
+next:
+		t+=len;
+		
+		/* append DELIM */
+		sum += DELIM_LEN;
+		if(sum > MAX_ROW_SIZE)
+		{	LOG(L_ERR, "sc_update: value too long for string \n");
+			ret = -3;
+			goto cleanup;
+		}
+		
+		strncpy(t, delim, DELIM_LEN);
+		t += DELIM_LEN;
+		
+		c = strtok(NULL, DELIM);
+		qcol++;
+	}
+	
+	ubuf[sum]  = '\0';
+	udata.data = ubuf;
+	udata.ulen  = MAX_ROW_SIZE;
+	udata.flags = DB_DBT_USERMEM;
+	udata.size  = sum;
+
+#ifdef SC_EXTRA_DEBUG
+	DBG("---2 MODIFIED Data\n     KEY:  [%.*s]\n     DATA: [%.*s]\n"
+		, (int)   key.size
+		, (char *)key.data
+		, (int)   udata.size
+		, (char *)udata.data);
+#endif
+	/* stage 3: DELETE old row using key*/
+	if ((ret = db->del(db, NULL, &key, 0)) == 0)
+	{
+#ifdef SC_EXTRA_DEBUG
+		DBG("---3 uDELETED ROW \n KEY: %s \n", (char *)key.data);
+#endif
+	}
+	else
+	{	goto db_error;
+	}
+	
+	/* stage 4: INSERT new row with key*/
+	if ((ret = db->put(db, NULL, &key, &udata, 0)) == 0) 
+	{
+		sclib_log(JLOG_UPDATE, _tp, ubuf, sum);
+#ifdef SC_EXTRA_DEBUG
+	DBG("---4 INSERT \n     KEY:  [%.*s]\n     DATA: [%.*s]\n"
+		, (int)   key.size
+		, (char *)key.data
+		, (int)   udata.size
+		, (char *)udata.data);
+#endif
+	}
+	else
+	{	goto db_error;
+	}
+
+#ifdef SC_EXTRA_DEBUG
+	DBG("-------------------------------------------------\n");
+	DBG("-- UPDATE COMPLETE \n");
+	DBG("-------------------------------------------------\n");
+#endif
+
+
+cleanup:
+	if(lkey)
+		pkg_free(lkey);
+	
+	return ret;
+
+
+db_error:
+
+	/*Berkeley DB error handler*/
+	switch(ret)
+	{
+	
+	case DB_NOTFOUND:
+	
+#ifdef SC_EXTRA_DEBUG
+		DBG("------------------------------\n");
+		DBG("--- NO RESULT \n");
+		DBG("------------------------------\n");
+#endif
+		return -1;
+	
+	/* The following are all critical/fatal */
+	case DB_LOCK_DEADLOCK:	
+	/* The operation was selected to resolve a deadlock. */
+	case DB_SECONDARY_BAD:
+	/* A secondary index references a nonexistent primary key.*/ 
+	case DB_RUNRECOVERY:
+	default:
+		LOG(L_CRIT,"sc_update: DB->get error: %s.\n", db_strerror(ret));
+		sclib_recover(_tp,ret);
+	}
+	
+	if(lkey)
+		pkg_free(lkey);
+	
+	return ret;
+}
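Stage 2 above (rebuilding the row with the updated columns substituted) can be pictured with a standalone sketch. rewrite_col is a hypothetical helper; the real loop additionally guards MAX_ROW_SIZE and converts typed values via sc_val2str():

```c
#include <assert.h>
#include <string.h>

/* hypothetical sketch of sc_update stage 2: walk the '|'-delimited
 * columns of the old row, copying each one unless it is the column
 * being updated, in which case the new value is substituted */
static void rewrite_col(char *out, size_t outlen, const char *row,
		int target, const char *newval)
{
	char tmp[256];
	char *c;
	int col = 0;

	strncpy(tmp, row, sizeof(tmp) - 1);
	tmp[sizeof(tmp) - 1] = '\0';
	out[0] = '\0';

	for (c = strtok(tmp, "|"); c != NULL; c = strtok(NULL, "|"), col++) {
		strncat(out, col == target ? newval : c, outlen - strlen(out) - 1);
		strncat(out, "|", outlen - strlen(out) - 1);
	}
}
```

Rewriting column 1 of "alice|old|7|" with "new" yields "alice|new|7|". Note that strtok() collapses consecutive delimiters, so an empty column would shift the column indexes; like the code above, the sketch assumes every column is non-empty.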

+ 96 - 0
modules/db_berkeley/db_berkeley.h

@@ -0,0 +1,96 @@
+/*
+ * $Id$
+ *
+ * db_berkeley module, portions of this code were templated using
+ * the dbtext and postgres modules.
+ *
+ * Copyright (C) 2007 Cisco Systems
+ *
+ * This file is part of openser, a free SIP server.
+ *
+ * openser is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version
+ *
+ * openser is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License 
+ * along with this program; if not, write to the Free Software 
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * 
+ * History:
+ * --------
+ * 2007-09-19  genesis (wiquan)
+ */
+
+
+#ifndef _BDB_H_
+#define _BDB_H_
+
+#include "../../db/db_con.h"
+#include "../../db/db_res.h"
+#include "../../db/db_key.h"
+#include "../../db/db_op.h"
+#include "../../db/db_val.h"
+
+/* reloads the berkeley db */
+void sc_reload(char* _n);
+
+void sc_check_reload(db_con_t* _con);
+int  sc_use_table(db_con_t* _h, const char* _t);
+
+/*
+ * Initialize database connection
+ */
+db_con_t* sc_init(const char* _sqlurl);
+
+
+/*
+ * Close a database connection
+ */
+void sc_close(db_con_t* _h);
+
+
+/*
+ * Free all memory allocated by get_result
+ */
+int sc_free_query(db_con_t* _h, db_res_t* _r);
+
+
+/*
+ * Do a query
+ */
+int sc_query(db_con_t* _h, db_key_t* _k, db_op_t* _op, db_val_t* _v, 
+			db_key_t* _c, int _n, int _nc, db_key_t _o, db_res_t** _r);
+
+
+/*
+ * Raw SQL query
+ */
+int sc_raw_query(db_con_t* _h, char* _s, db_res_t** _r);
+
+
+/*
+ * Insert a row into table
+ */
+int sc_insert(db_con_t* _h, db_key_t* _k, db_val_t* _v, int _n);
+
+
+/*
+ * Delete a row from table
+ */
+int sc_delete(db_con_t* _h, db_key_t* _k, db_op_t* _o, db_val_t* _v, int _n);
+int _sc_delete_cursor(db_con_t* _h, db_key_t* _k, db_op_t* _op, db_val_t* _v, int _n);
+
+/*
+ * Update a row in table
+ */
+int sc_update(db_con_t* _h, db_key_t* _k, db_op_t* _o, db_val_t* _v,
+	      db_key_t* _uk, db_val_t* _uv, int _n, int _un);
+
+#endif
+

+ 55 - 0
modules/db_berkeley/doc/db_berkeley.sgml

@@ -0,0 +1,55 @@
+<!DOCTYPE Book PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
+
+
+<!ENTITY user SYSTEM "db_berkeley_user.sgml">
+<!ENTITY devel SYSTEM "db_berkeley_devel.sgml">
+<!ENTITY faq SYSTEM "db_berkeley_faq.sgml">
+
+<!-- Include general documentation entities -->
+<!ENTITY % docentities SYSTEM "../../../doc/entities.sgml">
+%docentities;
+
+]>
+
+<book>
+    <bookinfo>
+	<title>Berkeley DB Module</title>
+	<productname class="trade">&sername;</productname>
+	<authorgroup>
+	    <author>
+		<firstname>Will</firstname>
+		<surname>Quan</surname>
+		<affiliation><orgname>Cisco Systems</orgname></affiliation>
+		<address>
+		<email>[email protected]</email>
+		<otheraddr>
+		<ulink url="http://www.cisco.com">http://www.cisco.com</ulink>
+		</otheraddr>
+		</address>
+	    </author>
+	    <editor>
+		<firstname>Will</firstname>
+		<surname>Quan</surname>
+		<address>
+		    <email>[email protected]</email>
+		</address>
+	    </editor>
+	</authorgroup>
+	<copyright>
+	    <year>2007</year>
+	    <holder>Cisco Systems</holder>
+	</copyright>
+	<revhistory>
+	    <revision>
+		<revnumber>$Revision: 846 $</revnumber>
+		<date>$Date: 2006-05-22 09:15:40 -0500 (Mon, 22 May 2006) $</date>
+	    </revision>
+	</revhistory>
+    </bookinfo>
+    <toc></toc>
+    
+    &user;
+    &devel;
+    &faq;
+    
+</book>

+ 22 - 0
modules/db_berkeley/doc/db_berkeley_devel.sgml

@@ -0,0 +1,22 @@
+<!-- Module Developer's Guide -->
+
+<chapter>
+    <chapterinfo>
+	<revhistory>
+	    <revision>
+		<revnumber>$Revision: 846 $</revnumber>
+		<date>$Date: 2006-05-22 09:15:40 -0500 (Mon, 22 May 2006) $</date>
+	    </revision>
+	</revhistory>
+    </chapterinfo>
+    <title>Developer's Guide</title>
+    <para>
+	The module does not provide any <acronym>API</acronym> to use in other &ser; modules.
+    </para>
+</chapter>
+
+<!-- Keep this element at the end of the file
+Local Variables:
+sgml-parent-document: ("db_berkeley.sgml" "book" "chapter")
+End:
+-->

+ 70 - 0
modules/db_berkeley/doc/db_berkeley_faq.sgml

@@ -0,0 +1,70 @@
+<!-- Module FAQ -->
+
+<chapter>
+    <chapterinfo>
+	<revhistory>
+	    <revision>
+		<revnumber>$Revision: 846 $</revnumber>
+		<date>$Date: 2006-05-22 09:15:40 -0500 (Mon, 22 May 2006) $</date>
+	    </revision>
+	</revhistory>
+    </chapterinfo>
+    <title>Frequently Asked Questions</title>
+    <qandaset defaultlabel="number">
+	<qandaentry>
+	    <question>
+		<para>Where can I find more about OpenSER?</para>
+	    </question>
+	    <answer>
+		<para>
+			Take a look at &serhomelink;.
+		</para>
+	    </answer>
+	</qandaentry>
+	<qandaentry>
+	    <question>
+		<para>Where can I post a question about this module?</para>
+	    </question>
+	    <answer>
+		<para>
+			First of all, check whether your question was already answered on one of
+			our mailing lists: 
+		</para>
+		<itemizedlist>
+		    <listitem>
+			<para>User Mailing List - &seruserslink;</para>
+		    </listitem>
+		    <listitem>
+			<para>Developer Mailing List - &serdevlink;</para>
+		    </listitem>
+		</itemizedlist>
+		<para>
+			E-mails regarding any stable &ser; release should be sent to 
+			&serusersmail; and e-mails regarding development versions
+			should be sent to &serdevmail;.
+		</para>
+		<para>
+			If you want to keep the mail private, send it to 
+			&serhelpmail;.
+		</para>
+	    </answer>
+	</qandaentry>
+	<qandaentry>
+	    <question>
+		<para>How can I report a bug?</para>
+	    </question>
+	    <answer>
+		<para>
+			Please follow the guidelines provided at:
+			&serbugslink;.
+		</para>
+	    </answer>
+	</qandaentry>
+    </qandaset>
+</chapter>
+
+<!-- Keep this element at the end of the file
+Local Variables:
+sgml-parent-document: ("db_berkeley.sgml" "Book" "chapter")
+End:
+-->

+ 522 - 0
modules/db_berkeley/doc/db_berkeley_user.sgml

@@ -0,0 +1,522 @@
+<!-- Module User's Guide -->
+
+<chapter>
+	<chapterinfo>
+	<revhistory>
+		<revision>
+		<revnumber>$Revision: 846 $</revnumber>
+		<date>$Date: 2006-05-22 09:15:40 -0500 (Mon, 22 May 2006) $</date>
+		</revision>
+	</revhistory>
+	</chapterinfo>
+	<title>User's Guide</title>
+	
+	<section>
+	<title>Overview</title>
+	<para>
+		This is a module which integrates the Berkeley DB into OpenSER.
+		It implements the DB API defined in OpenSER.
+	</para>
+	</section>
+
+	<section>
+	<title>Dependencies</title>
+	<section>
+		<title>&ser; Modules</title>
+		<para>
+		The following modules must be loaded before this module:
+			<itemizedlist>
+			<listitem>
+			<para>
+				<emphasis>No dependencies on other &ser; modules</emphasis>.
+			</para>
+			</listitem>
+			</itemizedlist>
+		</para>
+	</section>
+	
+	<section>
+		<title>External Libraries or Applications</title>
+		<para>
+		The following libraries or applications must be installed before running
+		&ser; with this module loaded:
+			<itemizedlist>
+			<listitem>
+			<para>
+				<emphasis>Berkeley DB 4.5</emphasis> - an embedded database.
+			</para>
+			</listitem>
+			</itemizedlist>
+		</para>
+	</section>
+	</section>
+	<section>
+	<title>Exported Parameters</title>
+	<section>
+		<title><varname>auto_reload</varname> (integer)</title>
+		<para>
+		When enabled, auto_reload will close and reopen a Berkeley DB file 
+		when its inode has changed. The check occurs only during a query; 
+		other operations, such as insert or delete, do not invoke auto_reload.
+		</para>
+		<para>
+		<emphasis>
+			Default value is 0 (1 - on / 0 - off).
+		</emphasis>
+		</para>
+		<example>
+		<title>Set <varname>auto_reload</varname> parameter</title>
+		<programlisting format="linespecific">
+...
+modparam("db_berkeley", "auto_reload", 1)
+...
+		</programlisting>
+		</example>
+	</section>
+	
+	<section>
+		<title><varname>log_enable</varname> (integer)</title>
+		<para>
+		The log_enable boolean controls whether journal files are created.
+		The following operations can be journaled: INSERT, UPDATE and
+		DELETE. Other operations, such as SELECT, are not journaled.
+		Journaling is required if you need to recover from a corrupt 
+		DB file; that is, bdb_recover requires these journals to rebuild 
+		the DB file. If you find this log feature useful, you may 
+		also be interested in the METADATA_LOGFLAGS bitfield that each 
+		table has. It allows you to control which operations to 
+		journal and the destination (such as syslog, stdout, or a local file). 
+		Refer to sclib_log() and the documentation on METADATA.
+		</para>
+		<para>
+		<emphasis>
+			Default value is 0 (1 - on / 0 - off).
+		</emphasis>
+		</para>
+		<example>
+		<title>Set <varname>log_enable</varname> parameter</title>
+		<programlisting format="linespecific">
+...
+modparam("db_berkeley", "log_enable", 1)
+...
+		</programlisting>
+		</example>
+	</section>
+	
+	<section>
+		<title><varname>journal_roll_interval</varname> (integer seconds)</title>
+		<para>
+		The journal_roll_interval causes the current journal file to be 
+		closed and a new one opened after the given number of seconds. 
+		The roll occurs only at the end of writing a log entry, 
+		so it is not guaranteed to roll exactly 'on time'.
+		</para>
+		<para>
+		<emphasis>
+			Default value is 0 (off).
+		</emphasis>
+		</para>
+		<example>
+		<title>Set <varname>journal_roll_interval</varname> parameter</title>
+		<programlisting format="linespecific">
+...
+modparam("db_berkeley", "journal_roll_interval", 3600)
+...
+		</programlisting>
+		</example>
+	</section>
+	
+	</section>
+	
+	<section>
+	<title>Exported Functions</title>
+		<para>
+		No functions are exported for use from the configuration file.
+		</para>
+	</section>
+	
+	<section>
+	<title>Installation and Running</title>
+		<para>
+		First download, compile and install the Berkeley DB. This is 
+		outside the scope of this document. Documentation for this 
+		procedure is available on the Internet.
+		</para>
+		
+		<para>
+		Next, set up to compile OpenSER with the db_berkeley module. 
+		In the directory modules/db_berkeley, modify the Makefile to point 
+		to your distribution of Berkeley DB.
+		</para>
+		
+		<para>
+		You may also define 'SC_EXTRA_DEBUG' to compile in extra debug logs; 
+		however, this is not recommended for production servers.
+		Because the module depends on an external library, the db_berkeley 
+		module is not compiled and installed by default. You can use one of 
+		the following options.
+		</para>
+		
+		<itemizedlist>
+			<listitem>
+			<para>
+			edit the "Makefile" and remove "db_berkeley" from "excluded_modules"
+			list. Then follow the standard procedure to install &ser;:
+			"make all; make install".
+			</para>
+			</listitem>
+			<listitem>
+			<para>
+			from command line use: 'make all include_modules="db_berkeley";
+			make install include_modules="db_berkeley"'.
+			</para>
+			</listitem>
+		</itemizedlist>
+		
+		<para>
+		Install OpenSER by running 'make install' as root from the main 
+		directory. This will install the binaries in /usr/local/sbin/.
+		If this was successful, the scripts/db_berkeley.sh file should now 
+		be installed as /usr/local/sbin/openser_db_berkeley.sh.
+		</para>
+		
+		<para>
+		Once you decide where you want to install the Berkeley DB files, 
+		for instance '/var/db_berkeley/bdb', you must initially create 
+		the files there. OpenSER will not start up unless these DB files 
+		already exist. Here are a couple of ways to do this:
+		</para>
+		
+		<example>
+		<title>Creating the DB files using the DB_HOME environment variable</title>
+		<programlisting format="linespecific">
+export DB_HOME=/var/db_berkeley/bdb ; /usr/local/sbin/openser_db_berkeley.sh create
+		</programlisting>
+		</example>
+
+		<para>
+		This way, any later operations with openser_db_berkeley.sh will not require 
+		you to provide the path to your DB files.
+		Alternatively, you can specify the path on the command line like this:
+		</para>
+		
+		<example>
+		<title>Creating the DB files using an explicit path</title>
+		<programlisting format="linespecific">
+/usr/local/sbin/openser_db_berkeley.sh create /var/db_berkeley/bdb
+		</programlisting>
+		</example>
+		
+		<para>
+		After this creation step, the DB files are now seeded with the 
+		necessary meta-data for OpenSER to startup. For a description of 
+		the meta-data refer to the section about db_berkeley.sh operations.
+		Modify the OpenSER configuration file to use db_berkeley. The 
+		database URL for modules must be the path to the directory where 
+		the Berkeley DB table-files are located, prefixed by "db_berkeley://", 
+		e.g., "db_berkeley:///var/db_berkeley/bdb". If you require the DB file 
+		to automatically reload be sure to include the modparam line for that.
+		to automatically reload, be sure to include the modparam line for that.
+		
+		<para>
+		A couple of other things to consider are the 'db_mode' and 'use_domain' 
+		modparams, as they affect behavior as well. The best description of 
+		these parameters is found in the usrloc documentation.
+		</para>
+		
+		<para>
+		The '|' pipe character is used as a record delimiter within this 
+		Berkeley DB implementation and must not be present in any DB field.
+		</para>
+	</section>
+	
+	<section>
+	<title>Database Schema and Metadata</title>
+	
+	<para>
+	Each Berkeley DB file must initially be created manually, via the 
+	openser_db_berkeley.sh maintenance utility. This section provides some 
+	details on the content and format of the DB file upon creation.
+	</para>
+
+	<para>
+	Since the Berkeley DB stores key/value pairs, the database is seeded 
+	with a few meta-data rows. The keys of these rows must begin with 'METADATA'. 
+	Here is an example of table meta-data, taken from the table 'version'.
+	</para>
+
+	<example>
+	<title>Metadata of the table 'version'</title>
+	<programlisting format="linespecific">
+METADATA_COLUMNS
+table_name(str) table_version(int)
+METADATA_KEY
+0
+	</programlisting>
+	</example>
+
+	<para>
+	In the above example, the row METADATA_COLUMNS defines the column names 
+	and types, and the row METADATA_KEY defines which column(s) form the key. 
+	Here the value of 0 indicates that column 0 (i.e. table_name) is the key. 
+	With respect to column types, the db_berkeley module has only the following 
+	types: string, str, int, double, and datetime. The default type is string, 
+	and is used when none of the others is specified. The columns of the 
+	meta-data are delimited by whitespace.
+	</para>
+
+	<para>
+	The actual column data is stored as a string value, and delimited by 
+	the '|' pipe character. Since the code tokenizes on this delimiter, 
+	it is important that this character not appear in any valid data field. 
+	The following is the output of the 'db_berkeley.sh dump version' command. 
+	It shows contents of table 'version' in plain text.
+	</para>
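Because rows are recovered by tokenizing on '|', a delimiter embedded in a field value shifts every following column. A hypothetical field counter illustrates the hazard:

```c
#include <assert.h>
#include <string.h>

/* hypothetical illustration: count the '|'-terminated fields of an
 * encoded row. A '|' inside a field value inflates the count, which
 * is why the delimiter must never appear in valid data. */
static int count_fields(const char *row)
{
	int n = 0;
	const char *p;

	for (p = row; *p != '\0'; p++)
		if (*p == '|')
			n++;
	return n;
}
```

A two-column row such as "alice|example.com|" counts 2 fields; if the first value itself contained a '|', the same row would parse as 3 columns and no longer match the table schema.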
+	
+	<example>
+	<title>contents of version table</title>
+	<programlisting format="linespecific">
+VERSION=3
+format=print
+type=hash
+h_nelem=21
+db_pagesize=4096
+HEADER=END
+ METADATA_READONLY
+ 1
+ address|
+ address|3
+ aliases|
+ aliases|1004
+ dbaliases|
+ dbaliases|1
+ domain|
+ domain|1
+ gw_grp|
+ gw_grp|1
+ gw|
+ gw|4
+ speed_dial|
+ speed_dial|2
+ subscriber|
+ subscriber|6
+ uri|
+ uri|1
+ METADATA_COLUMNS
+ table_name(str) table_version(int)
+ METADATA_KEY
+ 0
+ acc|
+ acc|4
+ grp|
+ grp|2
+ lcr|
+ lcr|2
+ location|
+ location|1004
+ missed_calls|
+ missed_calls|3
+ re_grp|
+ re_grp|1
+ silo|
+ silo|5
+ trusted|
+ trusted|4
+ usr_preferences|
+ usr_preferences|2
+DATA=END
+	</programlisting>
+	</example>
+	</section>
+	
+	<section>
+	<title>METADATA_COLUMNS (required)</title>
+	<para>
+	The METADATA_COLUMNS row contains the column names and types. 
+	Each is space-delimited. Here is an example of the data, taken from the table subscriber:
+	</para>
+	
+	<example>
+	<title>METADATA_COLUMNS</title>
+	<programlisting>
+METADATA_COLUMNS
+username(str) domain(str) password(str) ha1(str) ha1b(str) first_name(str) last_name(str) email_address(str) datetime_created(datetime) timezone(str) rpid(str)
+ 	</programlisting>
+	</example>
+	
+	<para>
+	Related (hardcoded) limitations:
+	<itemizedlist>
+		<listitem>
+			<para>maximum of 32 columns per table.</para>
+		</listitem>
+		
+		<listitem>
+			<para>maximum tablename size is 64.</para>
+		</listitem>
+		
+		<listitem>
+			<para>maximum data length is 2048</para>
+		</listitem>
+	</itemizedlist>
+	</para>
+	
+	<para>
+	Five types are currently supported: str, datetime, int, double, and string.
+	</para>
+	
+</section>
+
+	<section>
+	<title>METADATA_KEYS (required)</title>
+	<para>
+	The METADATA_KEY row indicates the indexes of the key columns, 
+	with respect to the order specified in METADATA_COLUMNS. 
+	Here is an example taken from table subscriber that brings up a good point:
+	</para>
+	
+	<example>
+	<title>METADATA_KEYS</title>
+	<programlisting>
+ METADATA_KEY
+ 0 1
+ 	</programlisting>
+	</example>
+
+ 	<para>
+	The point is that both the username and the domain name are required 
+	as the key to this record. Thus, the usrloc modparam 
+	use_domain = 1 must be set for this to work.
+	</para>
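A composite key such as the one above is assembled by concatenating the listed key columns, in METADATA_KEY order, with the '|' delimiter. A hypothetical sketch (make_key is illustrative, not the module's actual routine):

```c
#include <assert.h>
#include <string.h>

/* hypothetical sketch: build a record key from the column indexes
 * listed in METADATA_KEY (here 0 and 1: username and domain) */
static void make_key(char *key, size_t keylen, const char **row,
		const int *keycols, int nkeys)
{
	int i;

	key[0] = '\0';
	for (i = 0; i < nkeys; i++) {
		strncat(key, row[keycols[i]], keylen - strlen(key) - 1);
		strncat(key, "|", keylen - strlen(key) - 1);
	}
}
```

For a subscriber row starting with "alice" and "example.com", the key becomes "alice|example.com|", so lookups that supply only the username cannot find the record; hence the use_domain requirement.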
+	
+	</section>
+
+	<section>
+	<title>METADATA_READONLY (optional)</title>
+	<para>
+	The METADATA_READONLY row contains a boolean 0 or 1. 
+	By default, its value is 0. On startup the DB initially 
+	opens as read-write (to load the metadata); then, if this 
+	is set to 1, it will close and reopen as read-only. 
+	This is useful because read-only mode has an impact on the 
+	internal DB locking.
+	</para>
+	
+	</section>
+
+	<section>
+	<title>METADATA_LOGFLAGS (optional)</title>
+	<para>
+	The METADATA_LOGFLAGS row contains a bitfield that customizes the 
+	journaling on a per-table basis. If not present, the default value 
+	is 0. Here are the masks defined so far (taken from sc_lib.h):
+	</para>
+	
+	<example>
+	<title>METADATA_LOGFLAGS</title>
+	<programlisting>
+#define JLOG_NONE 0
+#define JLOG_INSERT 1
+#define JLOG_DELETE 2
+#define JLOG_UPDATE 4
+#define JLOG_STDOUT 8
+#define JLOG_SYSLOG 16
+	</programlisting>
+	</example>
+	
+	<para>
+	For example, to journal INSERT operations to the local journal file 
+	and also to syslog, set the value to JLOG_INSERT + JLOG_SYSLOG = 
+	1 + 16 = 17. To disable journaling entirely, set the value to 0.
+	</para>
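+	<para>
+	The flag arithmetic can be checked with a quick shell sketch; the 
+	numeric values are the ones listed above, and the METADATA_LOGFLAGS 
+	row simply stores the resulting integer:
+	</para>
+	
+	<example>
+	<title>Combining log flags</title>
+	<programlisting>
+# JLOG_INSERT = 1, JLOG_SYSLOG = 16 (values from sc_lib.h above)
+echo $(( 1 | 16 ))    # prints 17; store this in the METADATA_LOGFLAGS row
+	</programlisting>
+	</example>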
+	
+	</section>
+	
+	<section>
+	<title>Maintenance Shell Script: db_berkeley.sh</title>
+	<para>
+	The db_berkeley.sh script is located in the [openser_root_dir]/scripts 
+	directory. When invoked without parameters, it prints the following 
+	help text.
+	</para>
+	
+	<para>
+	Script for maintaining OpenSER Berkeley DB tables
+	<example>
+	<title>db_berkeley.sh usage</title>
+	<programlisting>
+usage: db_berkeley.sh create   [DB_HOME] (creates the db with files with metadata)
+       db_berkeley.sh presence [DB_HOME] (adds the presence related tables)
+       db_berkeley.sh extra    [DB_HOME] (adds the extra tables - imc,cpl,siptrace,domainpolicy)
+       db_berkeley.sh drop     [DB_HOME] (deletes db files in DB_HOME)
+       db_berkeley.sh reinit   [DB_HOME] (drop and create tables in one step)
+       db_berkeley.sh list     [DB_HOME] (lists the underlying db files on the FS)
+       db_berkeley.sh backup   [DB_HOME] (tars current database)
+       db_berkeley.sh restore   bu [DB_HOME] (untar bu into DB_HOME)
+       db_berkeley.sh dump      db [DB_HOME] (db_dump the underlying db file to STDOUT)
+       db_berkeley.sh swap      db [DB_HOME] (installs db.new by db -> db.old; db.new -> db)
+       db_berkeley.sh newappend db datafile [DB_HOME] (appends data to a new instance of db; output DB_HOME/db.new)
+	</programlisting>
+	</example>
+	</para>
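+	<para>
+	As a sketch (not the real script), the rename sequence behind the 
+	"swap" command above can be reproduced with plain mv. The directory 
+	and file contents here are placeholders:
+	</para>
+	
+	<example>
+	<title>Sketch of the swap rename sequence</title>
+	<programlisting>
+# stand-in DB_HOME with dummy table files
+DB_HOME=$(mktemp -d)
+echo "current"     > "$DB_HOME/db"      # current table file
+echo "replacement" > "$DB_HOME/db.new"  # staged replacement
+# swap: db -> db.old; db.new -> db
+mv "$DB_HOME/db"     "$DB_HOME/db.old"
+mv "$DB_HOME/db.new" "$DB_HOME/db"
+cat "$DB_HOME/db"                       # prints: replacement
+	</programlisting>
+	</example>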
+	</section>
+	
+	<section>
+	<title>DB Recovery : bdb_recover</title>
+	<para>
+	The db_berkeley module uses the Concurrent Data Store (CDS) architecture. 
+	As such, no transaction or journaling is provided by the DB natively. 
+	The application bdb_recover is specifically written to recover data from 
+	journal files that OpenSER creates.  
+	The bdb_recover application requires an additional text file that contains 
+	the table schema.
+	</para>
+	
+	<para>
+	The schema is loaded with the '-s' option and is required for all operations.
+	</para>
+	
+	<para>
+	The '-h' home option specifies the DB_HOME path. Unlike the Berkeley 
+	utilities, this application does not read the DB_HOME environment 
+	variable, so the path must be given on the command line; if omitted, 
+	the current working directory is assumed. The last argument is the 
+	operation. There are fundamentally only two kinds of operation: 
+	create and recover. 
+	</para>
+	
+	<para>
+	The following illustrates the four usage forms available to the administrator.
+	<example>
+	<title>bdb_recover usage</title>
+	<programlisting>
+usage: ./bdb_recover -s schemafile [-h home] [-c tablename]
+	This will create a brand new DB file with metadata.
+
+usage: ./bdb_recover -s schemafile [-h home] [-C all]
+	This will create all the core tables, each with metadata.
+
+usage: ./bdb_recover -s schemafile [-h home] [-r journal-file]
+	This will rebuild a DB and populate it with operation from journal-file. 
+	The table name is embedded in the journal-file name by convention.
+
+usage: ./bdb_recover -s schemafile [-h home] [-R lastN]
+	This will iterate over all core tables enumerated. If journal files exist in 'home', 
+	a new DB file will be created and populated with the data found in the last N files. 
+	The files are 'replayed' in chronological order (oldest to newest). This 
+	allows the administrator to rebuild the db with a subset of all possible 
+	operations if needed. For example, you may only be interested in 
+	the last hour's data in the location table.
+	</programlisting>
+	</example>
+	</para>
+	
+	<para>
+	It is important to note that the corrupted DB file must be moved 
+	out of the way before bdb_recover is executed.
+	</para>
+	
+	</section>
+</chapter>
+
+<!-- Keep this element at the end of the file
+Local Variables:
+sgml-parent-document: ("db_berkeley.sgml" "Book" "chapter")
+End:
+-->