
- fix link entity names, patch from Carsten Gross
- regenerate READMEs


git-svn-id: https://openser.svn.sourceforge.net/svnroot/openser/trunk@4630 689a6050-402a-0410-94f2-e92a70836424

Henning Westerholt 17 years ago
parent
commit
e08c9c4f52
1 changed file with 73 additions and 76 deletions

+ 73 - 76
modules/db_berkeley/km_README

@@ -1,4 +1,3 @@
-
 Berkeley DB Module
 
 Will Quan
@@ -12,8 +11,8 @@ Will Quan
    Copyright © 2007 Cisco Systems
    Revision History
    Revision $Revision: 846 $ $Date: 2006-05-22 09:15:40 -0500
-   (Mon, 22 May 2006) $
-     _________________________________________________________
+                             (Mon, 22 May 2006) $
+     __________________________________________________________
 
    Table of Contents
 
@@ -42,7 +41,7 @@ Will Quan
         1.9. METADATA_KEYS (required)
         1.10. METADATA_READONLY (optional)
         1.11. METADATA_LOGFLAGS (optional)
-        1.12. DB Maintaince Script : kamdbctl 
+        1.12. DB Maintaince Script : kamdbctl
         1.13. DB Recovery : kambdb_recover
         1.14. Known Limitations
 
@@ -85,10 +84,10 @@ Chapter 1. Admin Guide
 
    The auto-reload will close and reopen a Berkeley DB when the
    files inode has changed. The operation occurs only duing a
-   query. Other operations such as insert or delete, do not
-   invoke auto_reload.
+   query. Other operations such as insert or delete, do not invoke
+   auto_reload.
 
-   Default value is 0 (1 - on / 0 - off). 
+   Default value is 0 (1 - on / 0 - off).
 
    Example 1.1. Set auto_reload parameter
 ...
@@ -101,14 +100,14 @@ modparam("db_berkeley", "auto_reload", 1)
    The following operations can be journaled: INSERT, UPDATE,
    DELETE. Other operations such as SELECT, do not. This
    journaling are required if you need to recover from a corrupt
-   DB file. That is, kambdb_recover requires these to rebuild the db
-   file. If you find this log feature useful, you may also be
+   DB file. That is, kambdb_recover requires these to rebuild the
+   db file. If you find this log feature useful, you may also be
    interested in the METADATA_LOGFLAGS bitfield that each table
    has. It will allow you to control which operations to journal,
-   and the destination (like syslog, stdout, local-file). Refer
-   to bdblib_log() and documentation on METADATA.
+   and the destination (like syslog, stdout, local-file). Refer to
+   bdblib_log() and documentation on METADATA.
 
-   Default value is 0 (1 - on / 0 - off). 
+   Default value is 0 (1 - on / 0 - off).
 
    Example 1.2. Set log_enable parameter
 ...
@@ -121,7 +120,7 @@ modparam("db_berkeley", "log_enable", 1)
    The roll operation occurs only at the end of writing a log, so
    it is not guaranteed to to roll 'on time'.
 
-   Default value is 0 (off). 
+   Default value is 0 (off).
 
    Example 1.3. Set journal_roll_interval parameter
 ...
@@ -148,7 +147,7 @@ modparam("db_berkeley", "journal_roll_interval", 3600)
    Parameters: tablename (or db_path); to reload a particular
    table provide the tablename as the arguement (eg subscriber);
    to reload all tables provide the db_path to the db files. The
-   path can be found in kamctlrc DB_PATH variable. 
+   path can be found in kamctlrc DB_PATH variable.
 
 1.6. Installation and Running
 
@@ -166,8 +165,8 @@ modparam("db_berkeley", "journal_roll_interval", 3600)
    db_berkeley module is not compiled and installed by default.
    You can use one of the next options.
      * edit the "Makefile" and remove "db_berkeley" from
-       "excluded_modules" list. Then follow the standard
-       procedure to install Kamailio: "make all; make install".
+       "excluded_modules" list. Then follow the standard procedure
+       to install Kamailio: "make all; make install".
      * from command line use: 'make all
        include_modules="db_berkeley"; make install
        include_modules="db_berkeley"'.
@@ -189,7 +188,7 @@ modparam("db_berkeley", "journal_roll_interval", 3600)
    properly: DBENGINE and DB_PATH. Edit file:
    '/usr/local/etc/kamailio/kamctlrc'
                 ## database type: MYSQL, PGSQL, DB_BERKELEY, or DBTEXT,
- by default none is loaded
+by default none is loaded
                 # DBENGINE=DB_BERKELEY
 
                 ## database path used by dbtext or db_berkeley
@@ -201,17 +200,17 @@ modparam("db_berkeley", "journal_roll_interval", 3600)
    making modifications to your tables dbschema. By default, the
    files are installed in
    '/usr/local/share/kamailio/db_berkeley/openser' By default
-   these tables are created Read/Write and without any
-   journalling as shown. These settings can be modified on a per
-   table basis. Note: If you plan to use kambdb_recover, you must
-   change the LOGFLAGS.
+   these tables are created Read/Write and without any journalling
+   as shown. These settings can be modified on a per table basis.
+   Note: If you plan to use kambdb_recover, you must change the
+   LOGFLAGS.
                 METADATA_READONLY
                 0
                 METADATA_LOGFLAGS
                 0
 
-   Execute kamdbctl - There are three (3) groups of tables you
-   may need depending on your situation.
+   Execute kamdbctl - There are three (3) groups of tables you may
+   need depending on your situation.
                 kamdbctl create                 (required)
                 kamdbctl presence               (optional)
                 kamdbctl extra                  (optional)
@@ -234,8 +233,8 @@ modparam("db_berkeley", "journal_roll_interval", 3600)
    effort'. So if the hard drive becomes full, the attempt to
    write a journal entry may fail.
 
-   Note on use_domain- The db_berkeley module will attempt
-   natural joins when performing a query. This is basically a
+   Note on use_domain- The db_berkeley module will attempt natural
+   joins when performing a query. This is basically a
    lexigraphical string compare using the keys provided. In most
    places in the db_berkeley dbschema (unless you customize), the
    domainname is identified as a natural key. Consider an example
@@ -258,9 +257,9 @@ modparam("db_berkeley", "journal_roll_interval", 3600)
    begin with 'METADATA'. Here is an example of table meta-data,
    taken from the table 'version'.
 
-   Note on reserved character- The '|' pipe character is used as
-   a record delimiter within the Berkeley DB implementation and
-   must not be present in any DB field.
+   Note on reserved character- The '|' pipe character is used as a
+   record delimiter within the Berkeley DB implementation and must
+   not be present in any DB field.
 
    Example 1.4. METADATA_COLUMNS
 METADATA_COLUMNS
@@ -281,8 +280,8 @@ METADATA_KEY
    delimited by the '|' pipe character. Since the code tokenizes
    on this delimiter, it is important that this character not
    appear in any valid data field. The following is the output of
-   the 'db_berkeley.sh dump version' command. It shows contents
-   of table 'version' in plain text.
+   the 'db_berkeley.sh dump version' command. It shows contents of
+   table 'version' in plain text.
 
    Example 1.5. contents of version table
 VERSION=3
@@ -343,10 +342,9 @@ DATA=END
 
    Example 1.6. METADATA_COLUMNS
 METADATA_COLUMNS
-username(str) domain(str) password(str) ha1(str) ha1b(str) first_name(s
-tr) last_name(str) email_address(str) datetime_created(datetime) timezo
-ne(str) rpid(str)
-
+username(str) domain(str) password(str) ha1(str) ha1b(str) first_name(st
+r) last_name(str) email_address(str) datetime_created(datetime) timezone
+(str) rpid(str)
 
    Related (hardcoded) limitations:
      * maximum of 32 columns per table.
@@ -358,19 +356,18 @@ ne(str) rpid(str)
 
 1.9. METADATA_KEYS (required)
 
-   The METADATA_KEYS row indicates the indexes of the key
-   columns, with respect to the order specified in
-   METADATA_COLUMNS. Here is an example taken from table
-   subscriber that brings up a good point:
+   The METADATA_KEYS row indicates the indexes of the key columns,
+   with respect to the order specified in METADATA_COLUMNS. Here
+   is an example taken from table subscriber that brings up a good
+   point:
 
    Example 1.7. METADATA_KEYS
  METADATA_KEY
  0 1
 
-
-   The point is that both the username and domain name are
-   require as the key to this record. Thus, usrloc modparam
-   use_domain = 1 must be set for this to work.
+   The point is that both the username and domain name are require
+   as the key to this record. Thus, usrloc modparam use_domain = 1
+   must be set for this to work.
 
 1.10. METADATA_READONLY (optional)
 
@@ -383,9 +380,9 @@ ne(str) rpid(str)
 1.11. METADATA_LOGFLAGS (optional)
 
    The METADATA_LOGFLAGS row contains a bitfield that customizes
-   the journaling on a per table basis. If not present the
-   default value is taken as 0. Here are the masks so far (taken
-   from bdb_lib.h):
+   the journaling on a per table basis. If not present the default
+   value is taken as 0. Here are the masks so far (taken from
+   bdb_lib.h):
 
    Example 1.8. METADATA_LOGFLAGS
 #define JLOG_NONE 0
@@ -396,15 +393,15 @@ ne(str) rpid(str)
 #define JLOG_SYSLOG 16
 
    This means that if you want to journal INSERTS to local file
-   and syslog the value should be set to 1+16=17. Or if you do
-   not want to journal at all, set this to 0.
+   and syslog the value should be set to 1+16=17. Or if you do not
+   want to journal at all, set this to 0.
 
 1.12. DB Maintaince Script : kamdbctl
 
    Use the kamdbctl script for maintaining Kamailio Berkeley DB
-   tables. This script assumes you have DBENGINE and DB_PATH
-   setup correctly in kamctlrc. Note Unsupported commands are-
-   backup, restore, migrate, copy, serweb.
+   tables. This script assumes you have DBENGINE and DB_PATH setup
+   correctly in kamctlrc. Note Unsupported commands are- backup,
+   restore, migrate, copy, serweb.
 
    Example 1.9. kamdbctl
 usage: kamdbctl create
@@ -412,22 +409,22 @@ usage: kamdbctl create
        kamdbctl extra
        kamdbctl drop
        kamdbctl reinit
-       kamdbctl bdb list         (lists the underlying db files in DB_P
-ATH)
-       kamdbctl bdb cat       db (prints the contents of db file to STD
-OUT in plain-text)
-       kamdbctl bdb swap      db (installs db.new by db -> db.old; db.n
-ew -> db)
-       kamdbctl bdb append    db datafile (appends data to a new instan
-ce of db; output DB_PATH/db.new)
-       kamdbctl bdb newappend db datafile (appends data to a new instan
-ce of db; output DB_PATH/db.new)
+       kamdbctl bdb list         (lists the underlying db files in DB_PA
+TH)
+       kamdbctl bdb cat       db (prints the contents of db file to STDO
+UT in plain-text)
+       kamdbctl bdb swap      db (installs db.new by db -> db.old; db.ne
+w -> db)
+       kamdbctl bdb append    db datafile (appends data to a new instanc
+e of db; output DB_PATH/db.new)
+       kamdbctl bdb newappend db datafile (appends data to a new instanc
+e of db; output DB_PATH/db.new)
 
 1.13. DB Recovery : kambdb_recover
 
    The db_berkeley module uses the Concurrent Data Store (CDS)
-   architecture. As such, no transaction or journaling is
-   provided by the DB natively. The application kambdb_recover is
+   architecture. As such, no transaction or journaling is provided
+   by the DB natively. The application kambdb_recover is
    specifically written to recover data from journal files that
    Kamailio creates. The kambdb_recover application requires an
    additional text file that contains the table schema.
@@ -455,22 +452,22 @@ usage: ./kambdb_recover -s schemadir [-h home] [-C all]
         This will create all the core tables, each with metadata.
 
 usage: ./kambdb_recover -s schemadir [-h home] [-r journal-file]
-        This will rebuild a DB and populate it with operation from jour
-nal-file.
-        The table name is embedded in the journal-file name by conventi
-on.
+        This will rebuild a DB and populate it with operation from journ
+al-file.
+        The table name is embedded in the journal-file name by conventio
+n.
 
 usage: ./kambdb_recover -s schemadir [-h home] [-R lastN]
-        This will iterate over all core tables enumerated. If journal f
-iles exist in 'home',
+        This will iterate over all core tables enumerated. If journal fi
+les exist in 'home',
         a new DB file will be created and populated with the data found
- in the last N files.
-        The files are 'replayed' in chronological order (oldest to newe
-st). This
+in the last N files.
+        The files are 'replayed' in chronological order (oldest to newes
+t). This
         allows the administrator to rebuild the db with a subset of all
- possible
-        operations if needed. For example, you may only be interested i
-n
+possible
+        operations if needed. For example, you may only be interested in
+
         the last hours data in table location.
 
    Important note- A corrupted DB file must be moved out of the
@@ -480,5 +477,5 @@ n
 
    The Berkeley DB does not nativly support an autoincrement (or
    sequence) mechanism. Consequently, this version does not
-   support surragate keys in dbschema. These are the id columns
-   in the tables.
+   support surragate keys in dbschema. These are the id columns in
+   the tables.