
More backends refactoring (#71)

- Refactored backend system (default Gateway backend uses a decorator pattern)
- New API
- Update README, see the README for changes to the config & API details, info about the new default Gateway, etc.
Flashmob · 8 years ago · commit 14a83f6bd2

+ 2 - 0
.gitignore

@@ -1,4 +1,6 @@
 .idea
 goguerrilla.conf
+goguerrilla.conf.json
 /guerrillad
 vendor
+go-guerrilla.wiki

+ 1 - 4
.travis.yml

@@ -15,7 +15,4 @@ install:
 script:
   - ./.travis.gofmt.sh
   - make guerrillad
-  - go test ./tests
-  - go test
-  - go test ./cmd/guerrillad
-  - go test ./response
+  - make test

+ 9 - 0
Makefile

@@ -29,3 +29,12 @@ test: *.go */*.go */*/*.go
 	$(GO_VARS) $(GO) test -v ./tests
 	$(GO_VARS) $(GO) test -v ./cmd/guerrillad
 	$(GO_VARS) $(GO) test -v ./response
+	$(GO_VARS) $(GO) test -v ./backends
+	$(GO_VARS) $(GO) test -v ./mail
+
+testrace: *.go */*.go */*/*.go
+	$(GO_VARS) $(GO) test -v . -race
+	$(GO_VARS) $(GO) test -v ./tests -race
+	$(GO_VARS) $(GO) test -v ./cmd/guerrillad -race
+	$(GO_VARS) $(GO) test -v ./response -race
+	$(GO_VARS) $(GO) test -v ./backends -race

+ 195 - 347
README.md

@@ -1,106 +1,75 @@

 [![Build Status](https://travis-ci.org/flashmob/go-guerrilla.svg?branch=master)](https://travis-ci.org/flashmob/go-guerrilla)

-Go-Guerrilla SMTPd
+Go-Guerrilla SMTP Daemon
 ====================

-An minimalist SMTP server written in Go, made for receiving large volumes of mail.
+A lightweight SMTP server written in Go, made for receiving large volumes of mail.
+To be used as a package in your Go project, or as a stand-alone daemon by running the "guerrillad" binary.
+
+Supports MySQL and Redis out-of-the-box, with many other vendor-provided _processors_,
+such as [MailDir](https://github.com/flashmob/maildir-processor) and even [FastCGI](https://github.com/flashmob/fastcgi-processor)!
See below for a list of available processors.

 ![Go Guerrilla](/GoGuerrilla.png)

-### What is Go Guerrilla SMTPd?
+### What is Go-Guerrilla?
+
+It's an SMTP server written in Go, for the purpose of receiving large volumes of email.
+It started as a project for GuerrillaMail.com which processes millions of emails every day,
+and needed a daemon with less bloat & written in a more memory-safe language that can 
+take advantage of modern multi-core architectures.

-It's a small SMTP server written in Go, for the purpose of receiving large volume of email.
-Written for GuerrillaMail.com which processes hundreds of thousands of emails
-every hour.
+The purpose of this daemon is to grab the email, save it,
+and disconnect as quickly as possible, essentially performing the services of a
+Mail Transfer Agent (MTA) without the sending functionality.

-The purpose of this daemon is to grab the email, save it to the database
-and disconnect as quickly as possible.
+The software also includes a modular backend implementation, which can extend the email
+processing functionality to whatever needs you may require. We refer to these modules as 
+"_Processors_". Processors can be chained via the config to perform different tasks on 
+received email, or to validate recipients.

-A typical user of this software would probably want to look into 
-`backends/guerrilla_db_redis.go` source file to use as an example to 
-customize for their own systems.
+See the list of available _Processors_ below.

-This server does not attempt to filter HTML, check for spam or do any 
-sender verification. These steps should be performed by other programs,
- (or perhaps your own custom backend?).
-The server does not send any email including bounces.
+For more details about the backend system, see the:
+[Backends, configuring and extending](https://github.com/flashmob/go-guerrilla/wiki/Backends,-configuring-and-extending) page.
+
+### License

 The software is using MIT License (MIT) - contributors welcome.

-### Roadmap / Contributing & Bounties
+### Features
+
+#### Main Features
+
+- Multi-server. Can spawn multiple servers, all sharing the same backend
+for saving email.
+- Config hot-reloading. Add/Remove/Enable/Disable servers without restarting. 
+Reload TLS configuration, change most other settings on the fly.
+- Graceful shutdown: Minimise loss of email if you need to shutdown/restart.
+- Be a gentleman to the garbage collector: resources are pooled & recycled where possible.
+- Modular [Backend system](https://github.com/flashmob/go-guerrilla/wiki/Backends,-configuring-and-extending) 
+- Modern TLS support (STARTTLS or SMTPS).
+- Can be [used as a package](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package) in your Go project. 
+Get started in just a few lines of code!
+- [Fuzz tested](https://github.com/flashmob/go-guerrilla/wiki/Fuzz-testing). 
+[Auto-tested](https://travis-ci.org/flashmob/go-guerrilla). Battle Tested.
+
+#### Backend Features

+- Arranged as workers running in parallel, using a producer/consumer type structure, 
+ taking advantage of Go's channels and go-routines. 
+- Modular [backend system](https://github.com/flashmob/go-guerrilla/wiki/Backends,-configuring-and-extending)
+ structured using a [decorator-like pattern](https://en.wikipedia.org/wiki/Decorator_pattern) which allows the chaining of components (a.k.a. _Processors_) via the config.  
+- Different ways for processing / delivering email: Supports MySQL and Redis out-of-the box, many other 
+vendor provided processors available.
+
+### Roadmap / Contributing & Bounties

 Pull requests / issue reporting & discussion / code reviews always 
-welcome. To encourage more pull requests, we are now offering bounties 
-funded from our bitcoin donation address:
-
-`1grr11aWtbsyMUeB4EGfHvTuu7eFzkJ4A`
-
-So far we have the following bounties are still open:
-(Updated 24 Feb 2017)
-
-* Let's encrypt TLS certificate support! 
-Status: Open.
-Take a look at https://github.com/flashmob/go-guerrilla/issues/29
-(0.5 for a successful merge)
-
-* Analytics: A web based admin panel that displays live statistics.
-Status: Currently WIP, see branch https://github.com/flashmob/go-guerrilla/tree/dashboard
-Include the number of clients, memory usage, graph the number of
-connections/bytes/memory used for the last 24h.
-Show the top source clients by: IP, by domain & by HELO message.
-(1 BTC for a successful merge)
-
-* Fuzz Testing: Using https://github.com/dvyukov/go-fuzz
-Status: Completed, see result: https://github.com/flashmob/go-guerrilla/wiki/Fuzz-testing
-Implement a fuzzing client that will send input to the
-server's connection. 
-(0.25 BTC has been awarded)
-
-* Testing: Add some automated more tests to increase coverage.
-Staus: Open .(0.1 BTC for a successful merge, judged to be a satisfactory increase
-in coverage. Please open an issue before to discuss scope.
-Already awarded once)
-
-* Profiling: Simulate a configurable number of simultaneous clients 
-Status: Open. Send commands at random speeds with messages of various 
-lengths. Some connections to use TLS. Some connections may produce 
-errors, eg. disconnect randomly after a few commands, issue unexpected
-input or timeout. Provide a report of all the bottlenecks and setup so 
-that the report can be run automatically run when code is pushed to 
-github. (Flame graph maybe? https://github.com/uber/go-torch 
-Please open an issue before to discuss scope)
-(0.25 BTC)
-
-* Code review & possibly fix any tidbits.
-Status: Open.
-Submit a pull request with fixes, or suggestions for doing things better.
-(Already one bounty of 0.25 paid, however, more is always welcome)
-
-Ready to roll up your sleeves and have a go?
-Please open an issue for more clarification / details on Github.
-Also, welcome your suggestions for adding things to this Roadmap - please open an issue.
-
-Another way to contribute is to donate to our bitcoin address to help
-us fund more bounties!
-`1grr11aWtbsyMUeB4EGfHvTuu7eFzkJ4A`
-
-### Brief History and purpose
-
-Go-Guerrilla is used as the primary server for receiving email at
-Guerrilla Mail. As of 2016, it's handling all connections without any
-proxy (Nginx).
-
-Originally, Guerrilla Mail ran Exim which piped email to a php script (2009).
-As as the site got popular and more email came through, this approach
-eventually swamped the server.
-
-The next solution was to decrease the heavy setup into something more
-lightweight. A small script was written to implement a basic SMTP server (2010).
-Eventually that script also got swamped, so it was re-written to use
-event driven I/O (2012). A year later, the latest script also became inadequate
- so it was ported to Go and has served us well since.
+welcome. To encourage more pull requests, we are now offering bounties. 
+
+Take a look at our [Bounties and Roadmap](https://github.com/flashmob/go-guerrilla/wiki/Roadmap-and-Bounties) page!

 Getting started
@@ -108,322 +77,199 @@ Getting started

 (Assuming that you have GNU make and latest Go on your system)

-To build, just run
+#### Dependencies
+
+Go-Guerrilla uses [Glide](https://github.com/Masterminds/glide) to manage 
+dependencies. If you have glide installed, just run `glide install` as usual.
+ 
+You can also run `$ go get ./...` if you don't want to use glide, and then run `$ make test`
+to ensure all is good.
+
+To build the binary run:

 ```
 $ make guerrillad
 ```

-Rename goguerrilla.conf.sample to goguerrilla.conf
-
-See `backends/guerrilla_db_redis.go` source to use an example for creating your own email saving backend, 
-or the dummy one if you'd like to start from scratch.
-
-If you want to build on the sample `guerrilla-db-redis` module, setup the following table
-in MySQL:
-
-	CREATE TABLE IF NOT EXISTS `new_mail` (
-	  `mail_id` BIGINT(20) unsigned NOT NULL AUTO_INCREMENT,
-	  `date` datetime NOT NULL,
-	  `from` varchar(128) character set latin1 NOT NULL,
-	  `to` varchar(128) character set latin1 NOT NULL,
-	  `subject` varchar(255) NOT NULL,
-	  `body` text NOT NULL,
-	  `charset` varchar(32) character set latin1 NOT NULL,
-	  `mail` longblob NOT NULL,
-	  `spam_score` float NOT NULL,
-	  `hash` char(32) character set latin1 NOT NULL,
-	  `content_type` varchar(64) character set latin1 NOT NULL,
-	  `recipient` varchar(128) character set latin1 NOT NULL,
-	  `has_attach` int(11) NOT NULL,
-	  `ip_addr` varchar(15) NOT NULL,
-	  `return_path` VARCHAR(255) NOT NULL,
-	  `is_tls` BIT(1) DEFAULT b'0' NOT NULL,
-	  PRIMARY KEY  (`mail_id`),
-	  KEY `to` (`to`),
-	  KEY `hash` (`hash`),
-	  KEY `date` (`date`)
-	) ENGINE=InnoDB  DEFAULT CHARSET=utf8
-
-The above table does not store the body of the email which makes it quick
-to query and join, while the body of the email is fetched from Redis
-for future processing. The `mail` field can contain data in case Redis is down.
-Otherwise, if data is in Redis, the `mail` will be blank, and
-the `body` field will contain the word 'redis'.
-
-You can implement your own saveMail function to use whatever storage /
-backend fits for you. Please share them ^_^, in particular, we would 
-like to see other formats such as maildir and mbox.
+This will create an executable file named `guerrillad` that's ready to run.

+Next, copy the `goguerrilla.conf.sample` file to `goguerrilla.conf.json`. 
+You may need to customize the `pid_file` setting to somewhere local, 
+and also set `tls_always_on` to false if you don't have a valid certificate setup yet. 
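For instance, a trimmed-down `goguerrilla.conf.json` might look like this (a sketch using the keys from the sample config; hosts and paths are placeholders to adjust for your setup):

```json
{
    "allowed_hosts": ["example.com"],
    "pid_file": "./go-guerrilla.pid",
    "log_file": "stderr",
    "servers": [
        {
            "is_enabled": true,
            "host_name": "mail.example.com",
            "listen_interface": "127.0.0.1:25",
            "start_tls_on": true,
            "tls_always_on": false,
            "max_clients": 1000,
            "timeout": 180,
            "max_size": 1000000
        }
    ]
}
```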
 
 
-Use as a package
-============================
-Guerrilla SMTPd can also be imported and used as a package in your project.
-
-## Import Guerrilla.
-```go
-import "github.com/flashmob/go-guerrilla"
-```
+Next, run your server like this:

-## Implement the `Backend` interface
-Or use one of the implementations in the `backends` sub-package). This is how
-your application processes emails received by the Guerrilla app.
-```go
-type CustomBackend struct {...}
-
-func (cb *CustomBackend) Process(c *guerrilla.Envelope) guerrilla.BackendResult {
-  err := saveSomewhere(c.Data)
-  if err != nil {
-    return guerrilla.NewBackendResult(fmt.Sprintf("554 Error: %s", err.Error()))
-  }
-  return guerrilla.NewBackendResult("250 OK")
-}
-```
+`$ ./guerrillad serve`

-## Create an app instance.
-See Configuration section below for setting configuration options.
-```go
-config := &guerrilla.AppConfig{
-  Servers: []*guerrilla.ServerConfig{...},
-  AllowedHosts: []string{...}
-}
-backend := &CustomBackend{...}
-app, err := guerrilla.New(config, backend)
-```
+The configuration options are detailed on the [configuration page](https://github.com/flashmob/go-guerrilla/wiki/Configuration). 
+The main takeaway here is:

-## Start the app.
-`Start` is non-blocking, so make sure the main goroutine is kept busy
-```go
-startErrors := app.Start()
-```
+The default configuration uses 3 _processors_, they are set using the `save_process` 
+config option. Notice that it contains the following value: 
+`"HeadersParser|Header|Debugger"` - this means, once an email is received, it will
+first go through the `HeadersParser` processor where headers will be parsed.
+Next, it will go through the `Header` processor, where delivery headers will be added.
+Finally, it will finish at the `Debugger` which will log some debug messages.
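In `goguerrilla.conf.json` form, the chain described above is set like so (a fragment; assuming `save_process` lives under `backend_config` as described on the wiki):

```json
{
    "backend_config": {
        "save_process": "HeadersParser|Header|Debugger"
    }
}
```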
 
 
-## Shutting down.
-`Shutdown` will do a graceful shutdown, close all the connections, close
- the ports, and gracefully shutdown the backend. It will block until all
-  operations are complete.
- 
-```go
-app.Shutdown()
-```
+Where to go next?

-Configuration
-============================================
-The configuration is in strict JSON format. Here is an annotated configuration.
-Copy goguerrilla.conf.sample to goguerrilla.conf
-
-
-    {
-        "allowed_hosts": ["guerrillamail.com","guerrillamailblock.com","sharklasers.com","guerrillamail.net","guerrillamail.org"], // What hosts to accept
-        "pid_file" : "/var/run/go-guerrilla.pid", // pid = process id, so that other programs can send signals to our server
-        "log_file" : "stderr", // can be "off", "stderr", "stdout" or any path to a file
-        "log_level" : "info", // can be  "debug", "info", "error", "warn", "fatal", "panic"
-        "backend_name": "guerrilla-db-redis", // what backend to use for saving email. See /backends dir
-        "backend_config" :
-            {
-                "mysql_db":"gmail_mail",
-                "mysql_host":"127.0.0.1:3306",
-                "mysql_pass":"ok",
-                "mysql_user":"root",
-                "mail_table":"new_mail",
-                "redis_interface" : "127.0.0.1:6379",
-                "redis_expire_seconds" : 7200,
-                "save_workers_size" : 3,
-                "primary_mail_host":"sharklasers.com"
-            },
-        "servers" : [ // the following is an array of objects, each object represents a new server that will be spawned
-            {
-                "is_enabled" : true, // boolean
-                "host_name":"mail.test.com", // the hostname of the server as set by MX record
-                "max_size": 1000000, // maximum size of an email in bytes
-                "private_key_file":"/path/to/pem/file/test.com.key",  // full path to pem file private key
-                "public_key_file":"/path/to/pem/file/test.com.crt", // full path to pem file certificate
-                "timeout":180, // timeout in number of seconds before an idle connection is closed
-                "listen_interface":"127.0.0.1:25", // listen on ip and port
-                "start_tls_on":true, // supports the STARTTLS command?
-                "tls_always_on":false, // always connect using TLS? If true, start_tls_on will be false
-                "max_clients": 1000, // max clients at one time
-                "log_file":"/dev/stdout" // optional. Can be "off", "stderr", "stdout" or any path to a file. Will use global setting of empty.
-            },
-            // the following is a second server, but listening on port 465 and always using TLS
-            {
-                "is_enabled" : true,
-                "host_name":"mail.test.com",
-                "max_size":1000000,
-                "private_key_file":"/path/to/pem/file/test.com.key",
-                "public_key_file":"/path/to/pem/file/test.com.crt",
-                "timeout":180,
-                "listen_interface":"127.0.0.1:465",
-                "start_tls_on":false,
-                "tls_always_on":true,
-                "max_clients":500
-            }
-            // repeat as many servers as you need
-        ]
-    }
-    }
-
-The Json parser is very strict on syntax. If there's a parse error and it
-doesn't give much clue, then test your syntax here:
-http://jsonlint.com/#
-
-Email Saving Backends
-=====================
+- Try setting up an [example configuration](https://github.com/flashmob/go-guerrilla/wiki/Configuration-example:-save-to-Redis-&-MySQL) 
+which saves email bodies to Redis and metadata to MySQL.
+- Try importing some of the 'vendored' processors into your project. See [MailDiranasaurus](https://github.com/flashmob/maildiranasaurus)
+as an example project which imports the [MailDir](https://github.com/flashmob/maildir-processor) and [FastCGI](https://github.com/flashmob/fastcgi-processor) processors.
+- Try hacking the source and [create your own processor](https://github.com/flashmob/go-guerrilla/wiki/Backends,-configuring-and-extending).
+- Once your daemon is running, you might want to set up [log rotation](https://github.com/flashmob/go-guerrilla/wiki/Automatic-log-file-management-with-logrotate).

-Backends provide for a modular way to save email and for the ability to
-extend this functionality. They can be swapped in or out via the config. 
-Currently, the server comes with two example backends: 

-- dummy : used for testing purposes
-- guerrilla_db_redis: example uses MySQL and Redis to store email, used on Guerrilla Mail

-Releases
-========
+Use as a package
+============================
+Go-Guerrilla can be imported and used as a package in your Go project.

-(Master branch - Release Candidate 1 for v1.6)
-Large refactoring of the code. 
-- Introduced "backends": modular architecture for saving email
-- Issue: Use as a package in your own projects! https://github.com/flashmob/go-guerrilla/issues/20
-- Issue: Do not include dot-suffix in emails https://github.com/flashmob/go-guerrilla/issues/24
-- Logging functionality: logrus is now used for logging. Currently output is going to stdout
-- Incompatible change: Config's allowed_hosts is now an array
-- Incompatible change: The server's command is now a command called `guerrillad`
-- Config re-loading via SIGHUP: reload TLS, add/remove/enable/disable servers, change allowed hosts, timeout.
-- Begin writing automated tests
- 
+### Quickstart

-1.5.1 - 4nd Nov 2016 (Latest tagged release)
-- Small optimizations to the way email is saved

-1.5 - 2nd Nov 2016
-- Fixed a DoS vulnerability, stop reading after an input limit is reached
-- Fixed syntax error in Json goguerrilla.conf.sample
-- Do not load certificates if SSL is not enabled
-- check database back-end connections before starting
+#### 1. Import the guerrilla package
+```go
+import (
+    "github.com/flashmob/go-guerrilla"
+)

-1.4 - 25th Oct 2016
-- New Feature: multiple servers!
-- Changed the configuration file format to support multiple servers,
-this means that a new configuration file would need to be created form the
-sample (goguerrilla.conf.sample)
-- Organised code into separate files. Config is now strongly typed, etc
-Deprecated nginx proxy support

+```

-1.3 14th July 2016
-- Number of saveMail workers added to config (GM_SAVE_WORKERS)
-- convenience function for reading int values form config
-- advertise PIPELINING
-- added HELP command
-- rcpt to host validation: now case insensitive and done earlier (after DATA)
-- iconv switched to: go get gopkg.in/iconv.v1
+You may use `$ go get ./...` to get all dependencies; Go-Guerrilla also uses 
+[glide](https://github.com/Masterminds/glide) for dependency management.

-1.2 1st July 2016
-- Reload config on SIGHUP
-- Write current process id (pid) to a file, /var/run/go-guerrilla.pid by default
+#### 2. Start a server

+This will start a server with the default settings, listening on `127.0.0.1:2525`

-Using Nginx as a proxy
-======================
-Note: This release temporarily does not have proxy support.
-An issue has been opened to put back in https://github.com/flashmob/go-guerrilla/issues/30
-Nginx can be used to proxy SMTP traffic for GoGuerrilla SMTPd

-Why proxy SMTP with Nginx?
+```go

- *	Terminate TLS connections: (eg. Early Golang versions were not there yet when it came to TLS.)
- OpenSSL on the other hand, used in Nginx, has a complete implementation of TLS with familiar configuration.
- *	Nginx could be used for load balancing and authentication
+d := guerrilla.Daemon{}
+err := d.Start()

- 1.	Compile nginx with --with-mail --with-mail_ssl_module (most current nginx packages have this compiled already)
+if err == nil {
+    fmt.Println("Server Started!")
+}
+```

- 2.	Configuration:
+`d.Start()` *does not block* after the server has been started, so make sure that you keep your program busy.
+
+The defaults are: 
+* Server listens on 127.0.0.1:2525
+* Uses your hostname to determine which hosts to accept email for
+* 100 maximum clients
+* 10MB max message size 
+* Logs to Stderr
+* Log level set to "`debug`"
+* Timeout of 30 seconds
+* Backend configured with the following processors: `HeadersParser|Header|Debugger`, which will log the received emails
+
+Next, you may want to [change the interface](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#starting-a-server---custom-listening-interface) (`127.0.0.1:2525`) to the one of your own choice.
+
+#### API Documentation topics
+
+Please continue to the [API documentation](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package) for the following topics:
+
+
+- [Suppressing log output](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#starting-a-server---suppressing-log-output)
+- [Custom listening interface](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#starting-a-server---custom-listening-interface)
+- [What else can be configured](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#what-else-can-be-configured)
+- [Backends](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#backends)
+    - [About the backend system](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#about-the-backend-system)
+    - [Backend Configuration](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#backend-configuration)
+    - [Registering a Processor](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#registering-a-processor)
+- [Loading config from JSON](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#loading-config-from-json)
+- [Config hot-reloading](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#config-hot-reloading)
+- [Logging](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#logging-stuff)
+- [Log re-opening](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#log-re-opening)
+- [Graceful shutdown](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#graceful-shutdown)
+- [Pub/Sub](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#pubsub)
+- [More Examples](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#more-examples)
+
+Use as a Daemon
+==========================================================

+### Manual for using from the command line

-		mail {
-	        server {
-	                listen  15.29.8.163:25;
-	                protocol smtp;
-	                server_name  ak47.example.com;
-	                auth_http smtpauth.local:80/auth.txt;
-	                smtp_auth none;
-	                timeout 30000;
-	                smtp_capabilities "SIZE 15728640";
+- [guerrillad command](https://github.com/flashmob/go-guerrilla/wiki/Running-from-command-line#guerrillad-command)
+    - [Starting](https://github.com/flashmob/go-guerrilla/wiki/Running-from-command-line#starting)
+    - [Re-loading configuration](https://github.com/flashmob/go-guerrilla/wiki/Running-from-command-line#re-loading-the-config)
+    - [Re-open logs](https://github.com/flashmob/go-guerrilla/wiki/Running-from-command-line#re-open-log-file)
+    - [Examples](https://github.com/flashmob/go-guerrilla/wiki/Running-from-command-line#examples)

-	                # ssl default off. Leave off if starttls is on
-	                #ssl                  on;
-	                ssl_certificate      /etc/ssl/certs/ssl-cert-snakeoil.pem;
-	                ssl_certificate_key  /etc/ssl/private/ssl-cert-snakeoil.key;
-	                ssl_session_timeout  5m;
-	                # See https://mozilla.github.io/server-side-tls/ssl-config-generator/ Intermediate settings
-	                ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
-	                ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
-	                ssl_prefer_server_ciphers on;
-	                # TLS off unless client issues STARTTLS command
-	                starttls on;
-	                proxy on;
-	        }
-		}
+### Other topics

-		http {
+- [Using Nginx as a proxy](https://github.com/flashmob/go-guerrilla/wiki/Using-Nginx-as-a-proxy)
+- [Testing STARTTLS](https://github.com/flashmob/go-guerrilla/wiki/Running-from-command-line#testing-starttls)
+- [Benchmarking](https://github.com/flashmob/go-guerrilla/wiki/Profiling#benchmarking)

-		    # Add somewhere inside your http block..
-		    # make sure that you have added smtpauth.local to /etc/hosts
-		    # What this block does is tell the above stmp server to connect
-		    # to our golang server configured to run on 127.0.0.1:2525

-		    server {
-                    listen 15.29.8.163:80;
-                    server_name 15.29.8.163 smtpauth.local;
-                    root /home/user/http/auth/;
-                    access_log off;
-                    location /auth.txt {
-                        add_header Auth-Status OK;
-                        # where to find your smtp server?
-                        add_header Auth-Server 127.0.0.1;
-                        add_header Auth-Port 2525;
-                    }
+Email Processing Backend
+=====================

-                }
+The main job of a Go-Guerrilla backend is to validate recipients and deliver emails. The term
+"delivery" is often synonymous with saving email to secondary storage.

-		}
+The default backend implementation manages multiple workers. These workers are composed of 
+smaller components called "Processors" which are chained via the config to perform a series of steps.
+Each processor implements a distinct piece of behaviour. For example, a processor may save
+emails to a particular storage system such as MySQL, or it may add additional headers before 
+passing the email to the next _processor_.
 
 
+To extend or add a new feature, one would write a new Processor, then add it to the config.
+There are a few default _processors_ to get you started.
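The chaining idea can be illustrated with a small, self-contained sketch. The types below are hypothetical simplifications for illustration; the real `backends` package API differs:

```go
package main

import "fmt"

// Envelope is a simplified stand-in for go-guerrilla's mail envelope.
type Envelope struct {
	Headers map[string]string
	Body    string
}

// Processor performs one step of email processing.
type Processor interface {
	Process(e *Envelope) (*Envelope, error)
}

// ProcessorFunc lets plain functions satisfy Processor.
type ProcessorFunc func(e *Envelope) (*Envelope, error)

func (f ProcessorFunc) Process(e *Envelope) (*Envelope, error) { return f(e) }

// Decorator wraps a Processor with extra behaviour.
type Decorator func(Processor) Processor

// header mimics the "Header" processor: add a delivery header, then
// hand the envelope to the next processor in the chain.
func header(next Processor) Processor {
	return ProcessorFunc(func(e *Envelope) (*Envelope, error) {
		e.Headers["X-Delivery"] = "go-guerrilla"
		return next.Process(e)
	})
}

// debugger mimics the "Debugger" processor: log the envelope headers.
func debugger(next Processor) Processor {
	return ProcessorFunc(func(e *Envelope) (*Envelope, error) {
		fmt.Println("envelope headers:", e.Headers)
		return next.Process(e)
	})
}

// buildChain composes the decorators around a terminal no-op processor,
// analogous to configuring "Header|Debugger".
func buildChain() Processor {
	chain := []Decorator{header, debugger}
	var p Processor = ProcessorFunc(func(e *Envelope) (*Envelope, error) {
		return e, nil
	})
	// wrap in reverse, so the first decorator listed runs first
	for i := len(chain) - 1; i >= 0; i-- {
		p = chain[i](p)
	}
	return p
}

func main() {
	e, err := buildChain().Process(&Envelope{Headers: map[string]string{}, Body: "hello"})
	fmt.Println(e.Headers["X-Delivery"], err)
}
```

Adding a feature then amounts to writing another decorator and inserting its name into the chain, which is what the config string does for the real backend.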
 
 
 
 
+### Included Processors

-Starting / Command Line usage
-==========================================================
+| Processor | Description |
+|-----------|-------------|
+|Compressor|Sets a zlib compressor that other processors can use later|
+|Debugger|Logs the email envelope to help with testing|
+|Hasher|Processes each envelope to produce unique hashes to be used for ids later|
+|Header|Add a delivery header to the envelope|
+|HeadersParser|Parses MIME headers and also populates the Subject field of the envelope|
+|MySQL|Saves the emails to MySQL.|
+|Redis|Saves the email data to Redis.|
+|GuerrillaDbRedis|A 'monolithic' processor used at Guerrilla Mail; included as an example|

-All command line arguments are optional
+### Available Processors

-	-config="goguerrilla.conf": Path to the configuration file
-	 -if="": Interface and port to listen on, eg. 127.0.0.1:2525
-	 -v="n": Verbose, [y | n]
+The following processors can be imported into your project; use the
+[Daemon.AddProcessor](https://github.com/flashmob/go-guerrilla/wiki/Using-as-a-package#registering-a-processor) function to register them, then add them to your config.

-Starting from the command line (example)
+| Processor | Description |
+|-----------|-------------|
+|[MailDir](https://github.com/flashmob/maildir-processor)|Save emails to a maildir. [MailDiranasaurus](https://github.com/flashmob/maildiranasaurus) is an example project|
+|[FastCGI](https://github.com/flashmob/fastcgi-processor)|Deliver email directly to PHP-FPM or a similar FastCGI backend.|

-	/usr/bin/nohup /home/mike/goguerrilla -config=/home/mike/goguerrilla.conf 2>&1 &
+Have a processor that you would like to share? Submit a PR to add it to the list!

-This will place goguerrilla in the background and continue running
+Releases
+========

-You may also put another process to watch your goguerrilla process and re-start it
-if something goes wrong.
+Current release: 1.5.1 - 4th Nov 2016

-Testing STARTTLS
+Next Planned release: 2.0.0 - TBA

-Use openssl:
+See our [change log](https://github.com/flashmob/go-guerrilla/wiki/Change-Log) for change and release history

-    $ openssl s_client -starttls smtp -crlf -connect 127.0.0.1:2526
 
 
+Using Nginx as a proxy
+======================

-Benchmarking:
-==========================================================
+For such purposes as load balancing, terminating TLS early, or supporting
+SSL versions not supported by Go (strongly discouraged if you want to use older
+SSL versions), it is possible to [use NGINX as a proxy](https://github.com/flashmob/go-guerrilla/wiki/Using-Nginx-as-a-proxy).

-https://web.archive.org/web/20110725141905/http://www.jrh.org/smtp/index.html

-Test 500 clients:
-$ time smtp-source -c -l 5000 -t [email protected] -s 500 -m 5000 5.9.7.183

-Authors
+Credits
 =======

 Project Lead: 
@@ -435,10 +281,12 @@ Major Contributors:

 * Reza Mohammadi https://github.com/remohammadi
 * Jordan Schalm https://github.com/jordanschalm 
+* Philipp Resch https://github.com/dapaxx

 Thanks to:
 ----------
 * https://github.com/dvcrn
 * https://github.com/athoune
+* https://github.com/Xeoncross

 ... and anyone else who opened an issue / sent a PR / gave suggestions!

+ 243 - 0
api.go

@@ -0,0 +1,243 @@
+package guerrilla
+
+import (
+	"encoding/json"
+	"errors"
+	"fmt"
+	"github.com/flashmob/go-guerrilla/backends"
+	"github.com/flashmob/go-guerrilla/log"
+	"io/ioutil"
+	"time"
+)
+
+// Daemon provides a convenient API when using go-guerrilla as a package in your Go project.
+// It's a facade for Guerrilla, AppConfig, backends.Backend and log.Logger
+type Daemon struct {
+	Config  *AppConfig
+	Logger  log.Logger
+	Backend backends.Backend
+
+	// Guerrilla will be managed through the API
+	g Guerrilla
+
+	configLoadTime time.Time
+	subs           []deferredSub
+}
+
+type deferredSub struct {
+	topic Event
+	fn    interface{}
+}
+
+const defaultInterface = "127.0.0.1:2525"
+
+// AddProcessor adds a processor constructor to the backend.
+// name is the identifier to be used in the config. See backends docs for more info.
+func (d *Daemon) AddProcessor(name string, pc backends.ProcessorConstructor) {
+	backends.Svc.AddProcessor(name, pc)
+}
+
+// Start starts the daemon, initializing d.Config, d.Logger and d.Backend with defaults.
+// It can only be called once in the lifetime of the program.
+func (d *Daemon) Start() (err error) {
+	if d.g == nil {
+		if d.Config == nil {
+			d.Config = &AppConfig{}
+		}
+		if err = d.configureDefaults(); err != nil {
+			return err
+		}
+		if d.Logger == nil {
+			d.Logger, err = log.GetLogger(d.Config.LogFile, d.Config.LogLevel)
+			if err != nil {
+				return err
+			}
+		}
+		if d.Backend == nil {
+			d.Backend, err = backends.New(d.Config.BackendConfig, d.Logger)
+			if err != nil {
+				return err
+			}
+		}
+		d.g, err = New(d.Config, d.Backend, d.Logger)
+		if err != nil {
+			return err
+		}
+		for i := range d.subs {
+			d.Subscribe(d.subs[i].topic, d.subs[i].fn)
+
+		}
+		d.subs = make([]deferredSub, 0)
+	}
+	err = d.g.Start()
+	if err == nil {
+		if err := d.resetLogger(); err == nil {
+			d.Log().Infof("main log configured to %s", d.Config.LogFile)
+		}
+
+	}
+	return err
+}
+
+// Shuts down the daemon, including servers and backend.
+// Do not call Start on it again, use a new server.
+func (d *Daemon) Shutdown() {
+	if d.g != nil {
+		d.g.Shutdown()
+	}
+}
+
+// LoadConfig reads in the config from a JSON file.
+// Note: if d.Config is nil, it sets d.Config to the unmarshalled AppConfig, which is also returned
+func (d *Daemon) LoadConfig(path string) (AppConfig, error) {
+	var ac AppConfig
+	data, err := ioutil.ReadFile(path)
+	if err != nil {
+		return ac, fmt.Errorf("Could not read config file: %s", err.Error())
+	}
+	err = ac.Load(data)
+	if err != nil {
+		return ac, err
+	}
+	if d.Config == nil {
+		d.Config = &ac
+	}
+	return ac, nil
+}
+
+// SetConfig is the same as LoadConfig, except you can pass an AppConfig directly.
+// It does not emit any change events; use ReloadConfig after the daemon has started
+func (d *Daemon) SetConfig(c AppConfig) error {
+	// need to call c.Load, thus need to convert the config
+	// d.load takes json bytes, marshal it
+	data, err := json.Marshal(&c)
+	if err != nil {
+		return err
+	}
+	err = c.Load(data)
+	if err != nil {
+		return err
+	}
+	d.Config = &c
+	return nil
+}
+
+// Reload a config using the passed in AppConfig and emit config change events
+func (d *Daemon) ReloadConfig(c AppConfig) error {
+	oldConfig := *d.Config
+	err := d.SetConfig(c)
+	if err != nil {
+		d.Log().WithError(err).Error("Error while reloading config")
+		return err
+	} else {
+		d.Log().Infof("Configuration was reloaded at %s", d.configLoadTime)
+		d.Config.EmitChangeEvents(&oldConfig, d.g)
+	}
+	return nil
+}
+
+// Reload a config from a file and emit config change events
+func (d *Daemon) ReloadConfigFile(path string) error {
+	ac, err := d.LoadConfig(path)
+	if err != nil {
+		d.Log().WithError(err).Error("Error while reloading config from file")
+		return err
+	} else if d.Config != nil {
+		oldConfig := *d.Config
+		d.Config = &ac
+		d.Log().Infof("Configuration was reloaded at %s", d.configLoadTime)
+		d.Config.EmitChangeEvents(&oldConfig, d.g)
+	}
+	return nil
+}
+
+// ReopenLogs sends events to re-open all log files.
+// Typically, one would call this after rotating logs
+func (d *Daemon) ReopenLogs() error {
+	if d.Config == nil {
+		return errors.New("d.Config nil")
+	}
+	d.Config.EmitLogReopenEvents(d.g)
+	return nil
+}
+
+// Subscribe for subscribing to config change events
+func (d *Daemon) Subscribe(topic Event, fn interface{}) error {
+	if d.g == nil {
+		d.subs = append(d.subs, deferredSub{topic, fn})
+		return nil
+	}
+
+	return d.g.Subscribe(topic, fn)
+}
+
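Subscribe above queues subscriptions made before Start and replays them once the internal Guerrilla instance exists (see the deferred-subs loop in Start). A self-contained sketch of that deferred-registration idea; `bus`, `daemon` and `demo` are illustrative names with simplified types, not the real API:

```go
package main

import "fmt"

// bus stands in for the running engine (d.g in the Daemon above).
type bus struct {
	handlers map[string][]func(string)
}

func (b *bus) subscribe(topic string, fn func(string)) {
	b.handlers[topic] = append(b.handlers[topic], fn)
}

type deferredSub struct {
	topic string
	fn    func(string)
}

// daemon queues subscriptions made before start, then replays them.
type daemon struct {
	b    *bus
	subs []deferredSub
}

func (d *daemon) subscribe(topic string, fn func(string)) {
	if d.b == nil {
		// engine not started yet: remember the subscription for later
		d.subs = append(d.subs, deferredSub{topic, fn})
		return
	}
	d.b.subscribe(topic, fn)
}

func (d *daemon) start() {
	d.b = &bus{handlers: make(map[string][]func(string))}
	// replay the deferred subscriptions now that the engine exists
	for _, s := range d.subs {
		d.b.subscribe(s.topic, s.fn)
	}
	d.subs = nil
}

// demo subscribes before start and shows the handler still fires.
func demo() string {
	d := &daemon{}
	got := ""
	d.subscribe("config", func(v string) { got = v })
	d.start()
	for _, h := range d.b.handlers["config"] {
		h("reloaded")
	}
	return got
}

func main() {
	fmt.Println(demo())
}
```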
+// Publish is for publishing config change events
+func (d *Daemon) Publish(topic Event, args ...interface{}) {
+	if d.g == nil {
+		return
+	}
+	d.g.Publish(topic, args...)
+}
+
+// Unsubscribe is for unsubscribing from config change events
+func (d *Daemon) Unsubscribe(topic Event, handler interface{}) error {
+	if d.g == nil {
+		for i := range d.subs {
+			if d.subs[i].topic == topic && d.subs[i].fn == handler {
+				d.subs = append(d.subs[:i], d.subs[i+1:]...)
+			}
+		}
+		return nil
+	}
+	return d.g.Unsubscribe(topic, handler)
+}
+
+// Log returns a logger that implements our log.Logger interface.
+// level is set to "info" by default
+func (d *Daemon) Log() log.Logger {
+	if d.Logger != nil {
+		return d.Logger
+	}
+	out := log.OutputStderr.String()
+	level := log.InfoLevel.String()
+	if d.Config != nil {
+		if len(d.Config.LogFile) > 0 {
+			out = d.Config.LogFile
+		}
+		if len(d.Config.LogLevel) > 0 {
+			level = d.Config.LogLevel
+		}
+	}
+	l, _ := log.GetLogger(out, level)
+	return l
+
+}
+
+// set the default values for the servers and backend config options
+func (d *Daemon) configureDefaults() error {
+	err := d.Config.setDefaults()
+	if err != nil {
+		return err
+	}
+	if d.Backend == nil {
+		err = d.Config.setBackendDefaults()
+		if err != nil {
+			return err
+		}
+	}
+	return err
+}
+
+// resetLogger sets the logger to the one specified in the config.
+// This is because at the start, the daemon may be logging to stderr,
+// then attaches to the logs once the config is loaded.
+// This will propagate down to the servers / backend too.
+func (d *Daemon) resetLogger() error {
+	l, err := log.GetLogger(d.Config.LogFile, d.Config.LogLevel)
+	if err != nil {
+		return err
+	}
+	d.Logger = l
+	d.g.SetLogger(d.Logger)
+	return nil
+}

+ 536 - 0
api_test.go

@@ -0,0 +1,536 @@
+package guerrilla
+
+import (
+	"bufio"
+	"fmt"
+	"github.com/flashmob/go-guerrilla/backends"
+	"github.com/flashmob/go-guerrilla/log"
+	"github.com/flashmob/go-guerrilla/mail"
+	"io/ioutil"
+	"net"
+	"os"
+	"strings"
+	"testing"
+	"time"
+)
+
+// Test Starting smtp without setting up logger / backend
+func TestSMTP(t *testing.T) {
+	go func() {
+		select {
+		case <-time.After(time.Second * 40):
+			//buf := make([]byte, 1<<16)
+			//stackSize := runtime.Stack(buf, true)
+			//fmt.Printf("%s\n", string(buf[0:stackSize]))
+			//panic("timeout")
+			t.Error("timeout")
+			return
+
+		}
+	}()
+
+	d := Daemon{}
+	err := d.Start()
+
+	if err != nil {
+		t.Error(err)
+	}
+	// it should set to stderr automatically
+	if d.Config.LogFile != log.OutputStderr.String() {
+		t.Error("smtp.config.LogFile is not", log.OutputStderr.String())
+	}
+
+	if len(d.Config.AllowedHosts) == 0 {
+		t.Error("smtp.config.AllowedHosts len should be 1, not 0", d.Config.AllowedHosts)
+	}
+
+	if d.Config.LogLevel != "debug" {
+		t.Error("smtp.config.LogLevel expected 'debug', it is", d.Config.LogLevel)
+	}
+	if len(d.Config.Servers) != 1 {
+		t.Error("len(smtp.config.Servers) should be 1, got", len(d.Config.Servers))
+	}
+	time.Sleep(time.Second * 2)
+	d.Shutdown()
+
+}
+
+// Suppressing log output
+func TestSMTPNoLog(t *testing.T) {
+
+	// configure a default server with no log output
+	cfg := &AppConfig{LogFile: log.OutputOff.String()}
+	d := Daemon{Config: cfg}
+
+	err := d.Start()
+	if err != nil {
+		t.Error(err)
+	}
+	time.Sleep(time.Second * 2)
+	d.Shutdown()
+}
+
+// our custom server
+func TestSMTPCustomServer(t *testing.T) {
+	cfg := &AppConfig{LogFile: log.OutputOff.String()}
+	sc := ServerConfig{
+		ListenInterface: "127.0.0.1:2526",
+		IsEnabled:       true,
+	}
+	cfg.Servers = append(cfg.Servers, sc)
+	d := Daemon{Config: cfg}
+
+	err := d.Start()
+	if err != nil {
+		t.Error("start error", err)
+	} else {
+		time.Sleep(time.Second * 2)
+		d.Shutdown()
+	}
+
+}
+
+// with a backend config
+func TestSMTPCustomBackend(t *testing.T) {
+	cfg := &AppConfig{LogFile: log.OutputOff.String()}
+	sc := ServerConfig{
+		ListenInterface: "127.0.0.1:2526",
+		IsEnabled:       true,
+	}
+	cfg.Servers = append(cfg.Servers, sc)
+	bcfg := backends.BackendConfig{
+		"save_workers_size":  3,
+		"save_process":       "HeadersParser|Header|Hasher|Debugger",
+		"log_received_mails": true,
+		"primary_mail_host":  "example.com",
+	}
+	cfg.BackendConfig = bcfg
+	d := Daemon{Config: cfg}
+
+	err := d.Start()
+	if err != nil {
+		t.Error("start error", err)
+	} else {
+		time.Sleep(time.Second * 2)
+		d.Shutdown()
+	}
+}
+
+// with a config from a json file
+func TestSMTPLoadFile(t *testing.T) {
+	json := `{
+    "log_file" : "./tests/testlog",
+    "log_level" : "debug",
+    "pid_file" : "tests/go-guerrilla.pid",
+    "allowed_hosts": ["spam4.me","grr.la"],
+    "backend_config" :
+        {
+            "log_received_mails" : true,
+            "save_process": "HeadersParser|Header|Hasher|Debugger",
+            "save_workers_size":  3
+        },
+    "servers" : [
+        {
+            "is_enabled" : true,
+            "host_name":"mail.guerrillamail.com",
+            "max_size": 100017,
+            "private_key_file":"config_test.go",
+            "public_key_file":"config_test.go",
+            "timeout":160,
+            "listen_interface":"127.0.0.1:2526",
+            "start_tls_on":false,
+            "tls_always_on":false,
+            "max_clients": 2
+        }
+    ]
+}
+
+	`
+	json2 := `{
+    "log_file" : "./tests/testlog2",
+    "log_level" : "debug",
+    "pid_file" : "tests/go-guerrilla2.pid",
+    "allowed_hosts": ["spam4.me","grr.la"],
+    "backend_config" :
+        {
+            "log_received_mails" : true,
+            "save_process": "HeadersParser|Header|Hasher|Debugger",
+            "save_workers_size":  3
+        },
+    "servers" : [
+        {
+            "is_enabled" : true,
+            "host_name":"mail.guerrillamail.com",
+            "max_size": 100017,
+            "private_key_file":"config_test.go",
+            "public_key_file":"config_test.go",
+            "timeout":160,
+            "listen_interface":"127.0.0.1:2526",
+            "start_tls_on":false,
+            "tls_always_on":false,
+            "max_clients": 2
+        }
+    ]
+}
+
+	`
+	err := ioutil.WriteFile("goguerrilla.conf.api", []byte(json), 0644)
+	if err != nil {
+		t.Error("could not write guerrilla.conf.api", err)
+		return
+	}
+
+	d := Daemon{}
+	_, err = d.LoadConfig("goguerrilla.conf.api")
+	if err != nil {
+		t.Error("ReadConfig error", err)
+		return
+	}
+
+	err = d.Start()
+	if err != nil {
+		t.Error("start error", err)
+		return
+	} else {
+		time.Sleep(time.Second * 2)
+		if d.Config.LogFile != "./tests/testlog" {
+			t.Error("d.Config.LogFile != \"./tests/testlog\"")
+		}
+
+		if d.Config.PidFile != "tests/go-guerrilla.pid" {
+			t.Error("d.Config.PidFile != \"tests/go-guerrilla.pid\"")
+		}
+
+		err := ioutil.WriteFile("goguerrilla.conf.api", []byte(json2), 0644)
+		if err != nil {
+			t.Error("could not write guerrilla.conf.api", err)
+			return
+		}
+
+		d.ReloadConfigFile("goguerrilla.conf.api")
+
+		if d.Config.LogFile != "./tests/testlog2" {
+			t.Error("d.Config.LogFile != \"./tests/testlog2\"")
+		}
+
+		if d.Config.PidFile != "tests/go-guerrilla2.pid" {
+			t.Error("d.Config.PidFile != \"tests/go-guerrilla2.pid\"")
+		}
+
+		d.Shutdown()
+	}
+}
+
+func TestReopenLog(t *testing.T) {
+	os.Truncate("tests/testlog", 0)
+	cfg := &AppConfig{LogFile: "tests/testlog"}
+	sc := ServerConfig{
+		ListenInterface: "127.0.0.1:2526",
+		IsEnabled:       true,
+	}
+	cfg.Servers = append(cfg.Servers, sc)
+	d := Daemon{Config: cfg}
+
+	err := d.Start()
+	if err != nil {
+		t.Error("start error", err)
+	} else {
+		d.ReopenLogs()
+		time.Sleep(time.Second * 2)
+
+		d.Shutdown()
+	}
+
+	b, err := ioutil.ReadFile("tests/testlog")
+	if err != nil {
+		t.Error("could not read logfile")
+		return
+	}
+	if strings.Index(string(b), "re-opened log file") < 0 {
+		t.Error("Server log was not re-opened, expecting \"re-opened log file\"")
+	}
+	if strings.Index(string(b), "re-opened main log file") < 0 {
+		t.Error("Main log was not re-opened, expecting \"re-opened main log file\"")
+	}
+}
+
+func TestSetConfig(t *testing.T) {
+
+	os.Truncate("tests/testlog", 0)
+	cfg := AppConfig{LogFile: "tests/testlog"}
+	sc := ServerConfig{
+		ListenInterface: "127.0.0.1:2526",
+		IsEnabled:       true,
+	}
+	cfg.Servers = append(cfg.Servers, sc)
+	d := Daemon{Config: &cfg}
+
+	// lets add a new server
+	sc.ListenInterface = "127.0.0.1:2527"
+	cfg.Servers = append(cfg.Servers, sc)
+
+	err := d.SetConfig(cfg)
+	if err != nil {
+		t.Error("SetConfig returned an error:", err)
+		return
+	}
+
+	err = d.Start()
+	if err != nil {
+		t.Error("start error", err)
+	} else {
+
+		time.Sleep(time.Second * 2)
+
+		d.Shutdown()
+	}
+
+	b, err := ioutil.ReadFile("tests/testlog")
+	if err != nil {
+		t.Error("could not read logfile")
+		return
+	}
+	//fmt.Println(string(b))
+	// has 127.0.0.1:2527 started?
+	if strings.Index(string(b), "127.0.0.1:2527") < 0 {
+		t.Error("expecting 127.0.0.1:2527 to start")
+	}
+
+}
+
+func TestSetConfigError(t *testing.T) {
+
+	os.Truncate("tests/testlog", 0)
+	cfg := AppConfig{LogFile: "tests/testlog"}
+	sc := ServerConfig{
+		ListenInterface: "127.0.0.1:2526",
+		IsEnabled:       true,
+	}
+	cfg.Servers = append(cfg.Servers, sc)
+	d := Daemon{Config: &cfg}
+
+	// lets add a new server with bad TLS
+	sc.ListenInterface = "127.0.0.1:2527"
+	sc.StartTLSOn = true
+	sc.PublicKeyFile = "tests/testlog"  // totally wrong :->
+	sc.PrivateKeyFile = "tests/testlog" // totally wrong :->
+
+	cfg.Servers = append(cfg.Servers, sc)
+
+	err := d.SetConfig(cfg)
+	if err == nil {
+		t.Error("SetConfig should have returned an error complaining about bad TLS settings")
+		return
+	}
+}
+
+var funkyLogger = func() backends.Decorator {
+
+	backends.Svc.AddInitializer(
+		backends.InitializeWith(
+			func(backendConfig backends.BackendConfig) error {
+				backends.Log().Info("Funky logger is up & down to funk!")
+				return nil
+			}),
+	)
+
+	backends.Svc.AddShutdowner(
+		backends.ShutdownWith(
+			func() error {
+				backends.Log().Info("The funk has been stopped!")
+				return nil
+			}),
+	)
+
+	return func(p backends.Processor) backends.Processor {
+		return backends.ProcessWith(
+			func(e *mail.Envelope, task backends.SelectTask) (backends.Result, error) {
+				if task == backends.TaskValidateRcpt {
+					// validate the last recipient appended to e.Rcpt
+					backends.Log().Infof(
+						"another funky recipient [%s]",
+						e.RcptTo[len(e.RcptTo)-1])
+					// if valid then forward call to the next processor in the chain
+					return p.Process(e, task)
+					// if invalid, return a backend result
+					//return backends.NewResult(response.Canned.FailRcptCmd), nil
+				} else if task == backends.TaskSaveMail {
+					backends.Log().Info("Another funky email!")
+				}
+				return p.Process(e, task)
+			})
+	}
+}
+
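funkyLogger above is an instance of the decorator pattern the refactored backend is built on: each Decorator wraps a Processor and forwards the call down the chain. A stripped-down, self-contained sketch of the same idea follows; it uses a simplified string-based Processor instead of the real envelope/task types, and `tag`, `chain` and `run` are illustrative names:

```go
package main

import "fmt"

// Processor is a simplified stand-in for backends.Processor.
type Processor interface {
	Process(s string) string
}

// ProcessWith lets a plain function satisfy Processor.
type ProcessWith func(s string) string

func (f ProcessWith) Process(s string) string { return f(s) }

// Decorator wraps a Processor with extra behaviour.
type Decorator func(Processor) Processor

// tag returns a Decorator that appends a label, then forwards the call.
func tag(label string) Decorator {
	return func(p Processor) Processor {
		return ProcessWith(func(s string) string {
			return p.Process(s + "|" + label)
		})
	}
}

// chain applies decorators right-to-left around a terminal processor,
// so the first decorator listed runs first.
func chain(p Processor, ds ...Decorator) Processor {
	for i := len(ds) - 1; i >= 0; i-- {
		p = ds[i](p)
	}
	return p
}

// run builds a small chain and processes one message through it.
func run() string {
	terminal := ProcessWith(func(s string) string { return s + "|save" })
	p := chain(terminal, tag("headers"), tag("debug"))
	return p.Process("mail")
}

func main() {
	fmt.Println(run()) // mail|headers|debug|save
}
```

This mirrors how a `save_process` string like "HeadersParser|Debugger|FunkyLogger" becomes a chain of wrapped processors.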
+// How about a custom processor?
+func TestSetAddProcessor(t *testing.T) {
+	os.Truncate("tests/testlog", 0)
+	cfg := &AppConfig{
+		LogFile:      "tests/testlog",
+		AllowedHosts: []string{"grr.la"},
+		BackendConfig: backends.BackendConfig{
+			"save_process":     "HeadersParser|Debugger|FunkyLogger",
+			"validate_process": "FunkyLogger",
+		},
+	}
+	d := Daemon{Config: cfg}
+	d.AddProcessor("FunkyLogger", funkyLogger)
+
+	d.Start()
+	// lets have a talk with the server
+	talkToServer("127.0.0.1:2525")
+
+	d.Shutdown()
+
+	b, err := ioutil.ReadFile("tests/testlog")
+	if err != nil {
+		t.Error("could not read logfile")
+		return
+	}
+	// lets check for fingerprints
+	if strings.Index(string(b), "another funky recipient") < 0 {
+		t.Error("did not log: another funky recipient")
+	}
+
+	if strings.Index(string(b), "Another funky email!") < 0 {
+		t.Error("Did not log: Another funky email!")
+	}
+
+	if strings.Index(string(b), "Funky logger is up & down to funk") < 0 {
+		t.Error("Did not log: Funky logger is up & down to funk")
+	}
+	if strings.Index(string(b), "The funk has been stopped!") < 0 {
+		t.Error("Did not log: The funk has been stopped!")
+	}
+
+}
+
+func talkToServer(address string) {
+
+	conn, err := net.Dial("tcp", address)
+	if err != nil {
+
+		return
+	}
+	in := bufio.NewReader(conn)
+	str, err := in.ReadString('\n')
+	//	fmt.Println(str)
+	fmt.Fprint(conn, "HELO maildiranasaurustester\r\n")
+	str, err = in.ReadString('\n')
+	//	fmt.Println(str)
+	fmt.Fprint(conn, "MAIL FROM:<[email protected]>\r\n")
+	str, err = in.ReadString('\n')
+	//	fmt.Println(str)
+	fmt.Fprint(conn, "RCPT TO:[email protected]\r\n")
+	str, err = in.ReadString('\n')
+	//	fmt.Println(str)
+	fmt.Fprint(conn, "DATA\r\n")
+	str, err = in.ReadString('\n')
+	//	fmt.Println(str)
+	fmt.Fprint(conn, "Subject: Test subject\r\n")
+	fmt.Fprint(conn, "\r\n")
+	fmt.Fprint(conn, "A an email body\r\n")
+	fmt.Fprint(conn, ".\r\n")
+	str, err = in.ReadString('\n')
+	//	fmt.Println(str)
+	_ = str
+}
+
+// Test hot config reload
+// Here we forgot to add FunkyLogger so backend will fail to init
+
+func TestReloadConfig(t *testing.T) {
+	os.Truncate("tests/testlog", 0)
+	d := Daemon{}
+	d.Start()
+
+	cfg := AppConfig{
+		LogFile:      "tests/testlog",
+		AllowedHosts: []string{"grr.la"},
+		BackendConfig: backends.BackendConfig{
+			"save_process":     "HeadersParser|Debugger|FunkyLogger",
+			"validate_process": "FunkyLogger",
+		},
+	}
+	// Look mom, reloading the config without shutting down!
+	d.ReloadConfig(cfg)
+
+	d.Shutdown()
+}
+
+func TestPubSubAPI(t *testing.T) {
+
+	os.Truncate("tests/testlog", 0)
+
+	d := Daemon{Config: &AppConfig{LogFile: "tests/testlog"}}
+	d.Start()
+
+	// new config
+	cfg := AppConfig{
+		PidFile:      "tests/pidfilex.pid",
+		LogFile:      "tests/testlog",
+		AllowedHosts: []string{"grr.la"},
+		BackendConfig: backends.BackendConfig{
+			"save_process":     "HeadersParser|Debugger|FunkyLogger",
+			"validate_process": "FunkyLogger",
+		},
+	}
+
+	var i = 0
+	pidEvHandler := func(c *AppConfig) {
+		i++
+		if i > 1 {
+			t.Error("number > 1, it means d.Unsubscribe didn't work")
+		}
+		d.Logger.Info("number", i)
+	}
+	d.Subscribe(EventConfigPidFile, pidEvHandler)
+
+	d.ReloadConfig(cfg)
+
+	d.Unsubscribe(EventConfigPidFile, pidEvHandler)
+	cfg.PidFile = "tests/pidfile2.pid"
+	d.Publish(EventConfigPidFile, &cfg)
+	d.ReloadConfig(cfg)
+
+	b, err := ioutil.ReadFile("tests/testlog")
+	if err != nil {
+		t.Error("could not read logfile")
+		return
+	}
+	// lets interrogate the log
+	if strings.Index(string(b), "number1") < 0 {
+		t.Error("it looks like d.ReloadConfig(cfg) did not fire EventConfigPidFile, pidEvHandler not called")
+	}
+
+}
+
+func TestAPILog(t *testing.T) {
+	os.Truncate("tests/testlog", 0)
+	d := Daemon{}
+	l := d.Log()
+	l.Info("logtest1") // to stderr
+	if l.GetLevel() != log.InfoLevel.String() {
+		t.Error("Log level does not eq info, it is ", l.GetLevel())
+	}
+	d.Logger = nil
+	d.Config = &AppConfig{LogFile: "tests/testlog"}
+	l = d.Log()
+	l.Info("logtest1") // to tests/testlog
+
+	//
+	l = d.Log()
+	if l.GetLogDest() != "tests/testlog" {
+		t.Error("log dest is not tests/testlog, it was ", l.GetLogDest())
+	}
+
+	b, err := ioutil.ReadFile("tests/testlog")
+	if err != nil {
+		t.Error("could not read logfile")
+		return
+	}
+	// lets interrogate the log
+	if strings.Index(string(b), "logtest1") < 0 {
+		t.Error("logtest1 was not found in the log, it should have been in tests/testlog")
+	}
+}

+ 0 - 157
backends/abstract.go

@@ -1,157 +0,0 @@
-package backends
-
-import (
-	"errors"
-	"fmt"
-	"github.com/flashmob/go-guerrilla/envelope"
-	"reflect"
-	"strings"
-)
-
-type AbstractBackend struct {
-	config abstractConfig
-	extend Backend
-}
-
-type abstractConfig struct {
-	LogReceivedMails bool `json:"log_received_mails"`
-}
-
-// Your backend should implement this method and set b.config field with a custom config struct
-// Therefore, your implementation would have your own custom config type instead of dummyConfig
-func (b *AbstractBackend) loadConfig(backendConfig BackendConfig) (err error) {
-	// Load the backend config for the backend. It has already been unmarshalled
-	// from the main config file 'backend' config "backend_config"
-	// Now we need to convert each type and copy into the dummyConfig struct
-	configType := baseConfig(&abstractConfig{})
-	bcfg, err := b.extractConfig(backendConfig, configType)
-	if err != nil {
-		return err
-	}
-	m := bcfg.(*abstractConfig)
-	b.config = *m
-	return nil
-}
-
-func (b *AbstractBackend) Initialize(config BackendConfig) error {
-	if b.extend != nil {
-		return b.extend.loadConfig(config)
-	}
-	err := b.loadConfig(config)
-	if err != nil {
-		return err
-	}
-	return nil
-}
-
-func (b *AbstractBackend) Shutdown() error {
-	if b.extend != nil {
-		return b.extend.Shutdown()
-	}
-	return nil
-}
-
-func (b *AbstractBackend) Process(mail *envelope.Envelope) BackendResult {
-	if b.extend != nil {
-		return b.extend.Process(mail)
-	}
-	mail.ParseHeaders()
-
-	if b.config.LogReceivedMails {
-		mainlog.Infof("Mail from: %s / to: %v", mail.MailFrom.String(), mail.RcptTo)
-		mainlog.Info("Headers are: %s", mail.Header)
-
-	}
-	return NewBackendResult("250 OK")
-}
-
-func (b *AbstractBackend) saveMailWorker(saveMailChan chan *savePayload) {
-	if b.extend != nil {
-		b.extend.saveMailWorker(saveMailChan)
-		return
-	}
-	defer func() {
-		if r := recover(); r != nil {
-			// recover form closed channel
-			fmt.Println("Recovered in f", r)
-		}
-		// close any connections / files
-		// ...
-
-	}()
-	for {
-		payload := <-saveMailChan
-		if payload == nil {
-			mainlog.Debug("No more saveMailChan payload")
-			return
-		}
-		// process the email here
-		result := b.Process(payload.mail)
-		// if all good
-		if result.Code() < 300 {
-			payload.savedNotify <- &saveStatus{nil, "s0m3l337Ha5hva1u3LOL"}
-		} else {
-			payload.savedNotify <- &saveStatus{errors.New(result.String()), "s0m3l337Ha5hva1u3LOL"}
-		}
-
-	}
-}
-
-func (b *AbstractBackend) getNumberOfWorkers() int {
-	if b.extend != nil {
-		return b.extend.getNumberOfWorkers()
-	}
-	return 1
-}
-
-func (b *AbstractBackend) testSettings() error {
-	if b.extend != nil {
-		return b.extend.testSettings()
-	}
-	return nil
-}
-
-// Load the backend config for the backend. It has already been unmarshalled
-// from the main config file 'backend' config "backend_config"
-// Now we need to convert each type and copy into the guerrillaDBAndRedisConfig struct
-// The reason why using reflection is because we'll get a nice error message if the field is missing
-// the alternative solution would be to json.Marshal() and json.Unmarshal() however that will not give us any
-// error messages
-func (h *AbstractBackend) extractConfig(configData BackendConfig, configType baseConfig) (interface{}, error) {
-	// Use reflection so that we can provide a nice error message
-	s := reflect.ValueOf(configType).Elem() // so that we can set the values
-	m := reflect.ValueOf(configType).Elem()
-	t := reflect.TypeOf(configType).Elem()
-	typeOfT := s.Type()
-
-	for i := 0; i < m.NumField(); i++ {
-		f := s.Field(i)
-		// read the tags of the config struct
-		field_name := t.Field(i).Tag.Get("json")
-		if len(field_name) > 0 {
-			// parse the tag to
-			// get the field name from struct tag
-			split := strings.Split(field_name, ",")
-			field_name = split[0]
-		} else {
-			// could have no tag
-			// so use the reflected field name
-			field_name = typeOfT.Field(i).Name
-		}
-		if f.Type().Name() == "int" {
-			if intVal, converted := configData[field_name].(float64); converted {
-				s.Field(i).SetInt(int64(intVal))
-			} else {
-				return configType, convertError("property missing/invalid: '" + field_name + "' of expected type: " + f.Type().Name())
-			}
-		}
-		if f.Type().Name() == "string" {
-			if stringVal, converted := configData[field_name].(string); converted {
-				s.Field(i).SetString(stringVal)
-			} else {
-				return configType, convertError("missing/invalid: '" + field_name + "' of type: " + f.Type().Name())
-			}
-		}
-	}
-	return configType, nil
-}

+ 207 - 137
backends/backend.go

@@ -1,77 +1,80 @@
 package backends
 
 import (
-	"errors"
 	"fmt"
+	"github.com/flashmob/go-guerrilla/log"
+	"github.com/flashmob/go-guerrilla/mail"
+	"reflect"
 	"strconv"
 	"strings"
 	"sync"
-	"time"
+	"sync/atomic"
+)
 
-	"github.com/flashmob/go-guerrilla/envelope"
-	"github.com/flashmob/go-guerrilla/log"
-	"github.com/flashmob/go-guerrilla/response"
+var (
+	Svc *service
+
+	// Stores the constructors for making a new processor decorator.
+	processors map[string]ProcessorConstructor
+
+	b Backend
 )
 
-var mainlog log.Logger
+func init() {
+	Svc = &service{}
+	processors = make(map[string]ProcessorConstructor)
+}
+
+type ProcessorConstructor func() Decorator
 
 
 // Backends process received mail. Depending on the implementation, they can store mail in the database,
 // write to a file, check for spam, re-transmit to another server, etc.
 // Must return an SMTP message (i.e. "250 OK") and a boolean indicating
 // whether the message was processed successfully.
 type Backend interface {
-	// Public methods
-	Process(*envelope.Envelope) BackendResult
+	// Process processes then saves the mail envelope
+	Process(*mail.Envelope) Result
+	// ValidateRcpt validates the last recipient that was pushed to the mail envelope
+	ValidateRcpt(e *mail.Envelope) RcptError
+	// Initialize initializes the backend, e.g. creates folders, sets up database connections
 	Initialize(BackendConfig) error
+	// Reinitialize initializes the backend after it was shut down
+	Reinitialize() error
+	// Shutdown frees / closes anything created during initialization
 	Shutdown() error
-
-	// start save mail worker(s)
-	saveMailWorker(chan *savePayload)
-	// get the number of workers that will be started
-	getNumberOfWorkers() int
-	// test database settings, permissions, correct paths, etc, before starting workers
-	testSettings() error
-	// parse the configuration files
-	loadConfig(BackendConfig) error
+	// Start starts a backend that has been initialized
+	Start() error
 }
 
 type BackendConfig map[string]interface{}
 
 
-var backends = map[string]Backend{}
+// All config structs extend from this
+type BaseConfig interface{}
 
-type baseConfig interface{}
-
-type saveStatus struct {
-	err  error
-	hash string
+type notifyMsg struct {
+	err      error
+	queuedID string
 }
 
-type savePayload struct {
-	mail        *envelope.Envelope
-	from        *envelope.EmailAddress
-	recipient   *envelope.EmailAddress
-	savedNotify chan *saveStatus
-}
-
-// BackendResult represents a response to an SMTP client after receiving DATA.
+// Result represents a response to an SMTP client after receiving DATA.
 // The String method should return an SMTP message ready to send back to the
 // client, for example `250 OK: Message received`.
-type BackendResult interface {
+type Result interface {
 	fmt.Stringer
 	// Code should return the SMTP code associated with this response, ie. `250`
 	Code() int
 }
 
 // Internal implementation of BackendResult for use by backend implementations.
-type backendResult string
+type result string
 
-func (br backendResult) String() string {
+func (br result) String() string {
 	return string(br)
 }
 
 // Parses the SMTP code from the first 3 characters of the SMTP message.
 // Returns 554 if code cannot be parsed.
-func (br backendResult) Code() int {
+func (br result) Code() int {
 	trimmed := strings.TrimSpace(string(br))
 	if len(trimmed) < 3 {
 		return 554
@@ -83,134 +86,201 @@ func (br backendResult) Code() int {
 	return code
 }
 
-func NewBackendResult(message string) BackendResult {
-	return backendResult(message)
+func NewResult(message string) Result {
+	return result(message)
 }
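The middle of Code() is elided by the hunk above; the documented behaviour (parse the first three characters of the reply, fall back to 554) can be sketched stand-alone. `smtpCode` is an illustrative name, not the package API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// smtpCode extracts the status code from an SMTP reply line,
// returning 554 when no valid 3-digit code can be parsed.
func smtpCode(reply string) int {
	trimmed := strings.TrimSpace(reply)
	if len(trimmed) < 3 {
		return 554
	}
	code, err := strconv.Atoi(trimmed[:3])
	if err != nil {
		return 554
	}
	return code
}

func main() {
	fmt.Println(smtpCode("250 OK: Message received")) // 250
	fmt.Println(smtpCode("oops"))                     // 554 (fallback)
}
```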
 
 
-// A backend gateway is a proxy that implements the Backend interface.
-// It is used to start multiple goroutine workers for saving mail, and then distribute email saving to the workers
-// via a channel. Shutting down via Shutdown() will stop all workers.
-// The rest of this program always talks to the backend via this gateway.
-type BackendGateway struct {
-	AbstractBackend
-	saveMailChan chan *savePayload
-	// waits for backend workers to start/stop
-	wg sync.WaitGroup
-	b  Backend
-	// controls access to state
-	stateGuard sync.Mutex
-	State      backendState
-	config     BackendConfig
+type processorInitializer interface {
+	Initialize(backendConfig BackendConfig) error
 }
 
-// possible values for state
-const (
-	BackendStateRunning = iota
-	BackendStateShuttered
-	BackendStateError
-)
+type processorShutdowner interface {
+	Shutdown() error
+}
 
-type backendState int
+type InitializeWith func(backendConfig BackendConfig) error
+type ShutdownWith func() error
 
-func (s backendState) String() string {
-	return strconv.Itoa(int(s))
+// Initialize satisfies the processorInitializer interface,
+// so we can pass an anonymous function wherever a processorInitializer is expected
+func (i InitializeWith) Initialize(backendConfig BackendConfig) error {
+	// delegate to the anonymous function
+	return i(backendConfig)
 }
 
-// New retrieve a backend specified by the backendName, and initialize it using
-// backendConfig
-func New(backendName string, backendConfig BackendConfig, l log.Logger) (Backend, error) {
-	backend, found := backends[backendName]
-	mainlog = l
-	if !found {
-		return nil, fmt.Errorf("backend %q not found", backendName)
+// Shutdown satisfies the processorShutdowner interface, same concept as the InitializeWith type
+func (s ShutdownWith) Shutdown() error {
+	// delegate
+	return s()
+}
+
+type Errors []error
+
+// Error implements the error interface
+func (e Errors) Error() string {
+	if len(e) == 1 {
+		return e[0].Error()
 	}
-	gateway := &BackendGateway{b: backend, config: backendConfig}
-	err := gateway.Initialize(backendConfig)
-	if err != nil {
-		return nil, fmt.Errorf("error while initializing the backend: %s", err)
+	// multiple errors
+	msg := ""
+	for _, err := range e {
+		msg += "\n" + err.Error()
 	}
-	gateway.State = BackendStateRunning
-	return gateway, nil
+	return msg
 }
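InitializeWith and ShutdownWith use the same function-adapter idiom as net/http's HandlerFunc: a named function type gets a method that delegates to itself, so a plain closure satisfies the interface. A self-contained sketch; `Initializer` and `register` here are simplified stand-ins for the package's processorInitializer and AddInitializer, not the real API:

```go
package main

import "fmt"

// Initializer is a simplified stand-in for the processorInitializer interface.
type Initializer interface {
	Initialize(cfg map[string]interface{}) error
}

// InitializeWith adapts a bare function into an Initializer.
type InitializeWith func(cfg map[string]interface{}) error

// Initialize satisfies Initializer by delegating to the wrapped function.
func (f InitializeWith) Initialize(cfg map[string]interface{}) error {
	return f(cfg)
}

// register accepts anything that satisfies Initializer.
func register(i Initializer) error {
	return i.Initialize(map[string]interface{}{"save_workers_size": 3})
}

func main() {
	// a plain closure becomes an Initializer via the adapter type
	err := register(InitializeWith(func(cfg map[string]interface{}) error {
		fmt.Println("initialized with", len(cfg), "config key(s)")
		return nil
	}))
	if err != nil {
		panic(err)
	}
}
```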
 
 
-// Process distributes an envelope to one of the backend workers
-func (gw *BackendGateway) Process(e *envelope.Envelope) BackendResult {
-	if gw.State != BackendStateRunning {
-		return NewBackendResult(response.Canned.FailBackendNotRunning + gw.State.String())
-	}
+func convertError(name string) error {
+	return fmt.Errorf("failed to load backend config (%s)", name)
+}
 
 
-	to := e.RcptTo
-	from := e.MailFrom
-
-	// place on the channel so that one of the save mail workers can pick it up
-	// TODO: support multiple recipients
-	savedNotify := make(chan *saveStatus)
-	gw.saveMailChan <- &savePayload{e, &from, &to[0], savedNotify}
-	// wait for the save to complete
-	// or timeout
-	select {
-	case status := <-savedNotify:
-		if status.err != nil {
-			return NewBackendResult(response.Canned.FailBackendTransaction + status.err.Error())
-		}
-		return NewBackendResult(response.Canned.SuccessMessageQueued + status.hash)
+type service struct {
+	initializers []processorInitializer
+	shutdowners  []processorShutdowner
+	sync.Mutex
+	mainlog atomic.Value
+}
 
 
-	case <-time.After(time.Second * 30):
-		mainlog.Infof("Backend has timed out")
-		return NewBackendResult(response.Canned.FailBackendTimeout)
+// Log loads the log.Logger in an atomic operation. It returns a stderr logger if one cannot be loaded
+func Log() log.Logger {
+	if v, ok := Svc.mainlog.Load().(log.Logger); ok {
+		return v
 	}
+	l, _ := log.GetLogger(log.OutputStderr.String(), log.InfoLevel.String())
+	return l
+}
+
+func (s *service) SetMainlog(l log.Logger) {
+	s.mainlog.Store(l)
+}
+
+// AddInitializer adds a function that implements ProcessorInitializer, to be called when initializing
+func (s *service) AddInitializer(i processorInitializer) {
+	s.Lock()
+	defer s.Unlock()
+	s.initializers = append(s.initializers, i)
+}
+
+// AddShutdowner adds a function that implements ProcessorShutdowner to be called when shutting down
+func (s *service) AddShutdowner(sh processorShutdowner) {
+	s.Lock()
+	defer s.Unlock()
+	s.shutdowners = append(s.shutdowners, sh)
 }
-func (gw *BackendGateway) Shutdown() error {
-	gw.stateGuard.Lock()
-	defer gw.stateGuard.Unlock()
-	if gw.State != BackendStateShuttered {
-		err := gw.b.Shutdown()
-		if err == nil {
-			close(gw.saveMailChan) // workers will stop
-			gw.wg.Wait()
-			gw.State = BackendStateShuttered
+
+// reset clears the initializers and shutdowners
+func (s *service) reset() {
+	s.shutdowners = make([]processorShutdowner, 0)
+	s.initializers = make([]processorInitializer, 0)
+}
+
+// initialize runs all the initializers one-by-one and returns any errors.
+// Subsequent calls will not run an initializer again unless it failed on the
+// previous call, so initialize may be called again to retry after getting errors
+func (s *service) initialize(backend BackendConfig) Errors {
+	s.Lock()
+	defer s.Unlock()
+	var errors Errors
+	failed := make([]processorInitializer, 0)
+	for i := range s.initializers {
+		if err := s.initializers[i].Initialize(backend); err != nil {
+			errors = append(errors, err)
+			failed = append(failed, s.initializers[i])
 		}
-		return err
 	}
-	return nil
+	// keep only the failed initializers
+	s.initializers = failed
+	return errors
 }
 
 
-// Reinitialize starts up a backend gateway that was shutdown before
-func (gw *BackendGateway) Reinitialize() error {
-	if gw.State != BackendStateShuttered {
-		return errors.New("backend must be in BackendStateshuttered state to Reinitialize")
+// shutdown shuts down all the processors by calling their shutdowners (if any).
+// Subsequent calls will not run a shutdowner again unless it failed on the
+// previous call, so shutdown may be called again to retry after getting errors
+func (s *service) shutdown() Errors {
+	s.Lock()
+	defer s.Unlock()
+	var errors Errors
+	failed := make([]processorShutdowner, 0)
+	for i := range s.shutdowners {
+		if err := s.shutdowners[i].Shutdown(); err != nil {
+			errors = append(errors, err)
+			failed = append(failed, s.shutdowners[i])
+		}
 	}
-	err := gw.Initialize(gw.config)
-	if err != nil {
-		return fmt.Errorf("error while initializing the backend: %s", err)
+	s.shutdowners = failed
+	return errors
+}
+
+// AddProcessor adds a new processor, which becomes available to the backend_config.save_process option
+// and also the backend_config.validate_process option
+// Use it to add your own custom processor when using backends as a package, or after importing an external
+// processor.
+func (s *service) AddProcessor(name string, p ProcessorConstructor) {
+	// wrap in a constructor since we want to defer calling it
+	var c ProcessorConstructor
+	c = func() Decorator {
+		return p()
 	}
-	gw.State = BackendStateRunning
-	return err
+	// add to our processors list
+	processors[strings.ToLower(name)] = c
 }
 
 
-func (gw *BackendGateway) Initialize(cfg BackendConfig) error {
-	err := gw.b.Initialize(cfg)
-	if err == nil {
-		workersSize := gw.b.getNumberOfWorkers()
-		if workersSize < 1 {
-			gw.State = BackendStateError
-			return errors.New("Must have at least 1 worker")
+// ExtractConfig loads a processor's specific config from the main config file's
+// "backend_config" value, which has already been unmarshalled into configData.
+// configType is the processor's own config struct to fill in.
+// Reflection is used so that a helpful error message can name any missing field;
+// the alternative, round-tripping through json.Marshal() and json.Unmarshal(),
+// would not report which field was missing
+func (s *service) ExtractConfig(configData BackendConfig, configType BaseConfig) (interface{}, error) {
+	// Use reflection so that we can provide a nice error message
+	v := reflect.ValueOf(configType).Elem() // so that we can set the values
+	t := reflect.TypeOf(configType).Elem()
+	typeOfT := v.Type()
+
+	for i := 0; i < v.NumField(); i++ {
+		f := v.Field(i)
+		// read the tags of the config struct
+		field_name := t.Field(i).Tag.Get("json")
+		omitempty := false
+		if len(field_name) > 0 {
+			// parse the tag to get the field name and options from the struct tag
+			split := strings.Split(field_name, ",")
+			field_name = split[0]
+			if len(split) > 1 {
+				if split[1] == "omitempty" {
+					omitempty = true
+				}
+			}
+		} else {
+			// no tag, so fall back to the reflected field name
+			field_name = typeOfT.Field(i).Name
+		}
+		if f.Type().Name() == "int" {
+			// JSON numbers unmarshal as float64 (JSON has no int type)
+			if intVal, converted := configData[field_name].(float64); converted {
+				v.Field(i).SetInt(int64(intVal))
+			} else if intVal, converted := configData[field_name].(int); converted {
+				v.Field(i).SetInt(int64(intVal))
+			} else if !omitempty {
+				return configType, convertError("property missing/invalid: '" + field_name + "' of expected type: " + f.Type().Name())
+			}
 		}
-		if err := gw.b.testSettings(); err != nil {
-			gw.State = BackendStateError
-			return err
+		if f.Type().Name() == "string" {
+			if stringVal, converted := configData[field_name].(string); converted {
+				v.Field(i).SetString(stringVal)
+			} else if !omitempty {
+				return configType, convertError("missing/invalid: '" + field_name + "' of type: " + f.Type().Name())
+			}
 		}
-		gw.saveMailChan = make(chan *savePayload, workersSize)
-		// start our savemail workers
-		gw.wg.Add(workersSize)
-		for i := 0; i < workersSize; i++ {
-			go func() {
-				gw.b.saveMailWorker(gw.saveMailChan)
-				gw.wg.Done()
-			}()
+		if f.Type().Name() == "bool" {
+			if boolVal, converted := configData[field_name].(bool); converted {
+				v.Field(i).SetBool(boolVal)
+			} else if !omitempty {
+				return configType, convertError("missing/invalid: '" + field_name + "' of type: " + f.Type().Name())
+			}
 		}
-	} else {
-		gw.State = BackendStateError
 	}
-	return err
+	return configType, nil
 }
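The `InitializeWith` and `ShutdownWith` types above use the same adapter idiom as `http.HandlerFunc`: a named function type with a method that delegates to itself, so a plain closure can satisfy an interface. A minimal, self-contained sketch of the idea (the `Initializer` interface and `run` helper here are illustrative, not from the package):

```go
package main

import "fmt"

// Initializer is a tiny stand-in for the package's ProcessorInitializer.
type Initializer interface {
	Initialize(cfg map[string]interface{}) error
}

// InitializeWith lets a plain function satisfy Initializer,
// just like http.HandlerFunc adapts a func to http.Handler.
type InitializeWith func(cfg map[string]interface{}) error

func (f InitializeWith) Initialize(cfg map[string]interface{}) error {
	return f(cfg) // delegate to the wrapped function
}

func run(i Initializer) error {
	return i.Initialize(map[string]interface{}{"key": "value"})
}

func main() {
	err := run(InitializeWith(func(cfg map[string]interface{}) error {
		fmt.Println("initialized with", cfg["key"])
		return nil
	}))
	fmt.Println("err:", err)
}
```

This is why `Svc.AddInitializer(InitializeWith(func(...) error { ... }))` works: the anonymous function is converted to the named type, which carries the method.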

+ 13 - 0
backends/decorate.go

@@ -0,0 +1,13 @@
+package backends
+
+// Decorator is a function that wraps a Processor with additional behaviour
+type Decorator func(Processor) Processor
+
+// Decorate applies the given decorators to a Processor, in order
+func Decorate(c Processor, ds ...Decorator) Processor {
+	decorated := c
+	for _, decorate := range ds {
+		decorated = decorate(decorated)
+	}
+	return decorated
+}
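`Decorate` applies decorators in slice order, so the last decorator applied ends up outermost and runs first. A runnable sketch with a string-based stand-in for the package's `Processor` (all names here are illustrative):

```go
package main

import "fmt"

// Processor is a simplified stand-in for the package's Processor interface.
type Processor interface {
	Process(s string) string
}

type ProcessorFunc func(s string) string

func (f ProcessorFunc) Process(s string) string { return f(s) }

// Decorator wraps a Processor with extra behaviour, as in backends/decorate.go.
type Decorator func(Processor) Processor

// Decorate applies each decorator in turn; the last one applied is outermost.
func Decorate(p Processor, ds ...Decorator) Processor {
	decorated := p
	for _, d := range ds {
		decorated = d(decorated)
	}
	return decorated
}

// tag returns a decorator that labels the result of the inner Processor.
func tag(label string) Decorator {
	return func(next Processor) Processor {
		return ProcessorFunc(func(s string) string {
			return label + "(" + next.Process(s) + ")"
		})
	}
}

func main() {
	p := Decorate(ProcessorFunc(func(s string) string { return s }), tag("a"), tag("b"))
	fmt.Println(p.Process("mail")) // b(a(mail)): "b" was applied last, so it runs first
}
```

This ordering is also why the gateway's `newStack` iterates the pipe-separated config names in reverse before calling `Decorate`.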

+ 0 - 37
backends/dummy.go

@@ -1,37 +0,0 @@
-package backends
-
-func init() {
-	// decorator pattern
-	backends["dummy"] = &AbstractBackend{
-		extend: &DummyBackend{},
-	}
-}
-
-// custom configuration we will parse from the json
-// see guerrillaDBAndRedisConfig struct for a more complete example
-type dummyConfig struct {
-	LogReceivedMails bool `json:"log_received_mails"`
-}
-
-// putting all the paces we need together
-type DummyBackend struct {
-	config dummyConfig
-	// embed functions form AbstractBackend so that DummyBackend satisfies the Backend interface
-	AbstractBackend
-}
-
-// Backends should implement this method and set b.config field with a custom config struct
-// Therefore, your implementation would have a custom config type instead of dummyConfig
-func (b *DummyBackend) loadConfig(backendConfig BackendConfig) (err error) {
-	// Load the backend config for the backend. It has already been unmarshalled
-	// from the main config file 'backend' config "backend_config"
-	// Now we need to convert each type and copy into the dummyConfig struct
-	configType := baseConfig(&dummyConfig{})
-	bcfg, err := b.extractConfig(backendConfig, configType)
-	if err != nil {
-		return err
-	}
-	m := bcfg.(*dummyConfig)
-	b.config = *m
-	return nil
-}
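Processors now load their settings through `Svc.ExtractConfig`, which walks a config struct's fields via reflection and its `json` tags so a missing key can be reported by name (JSON numbers arrive as `float64`, hence the int conversion). A pared-down sketch of that struct-tag technique, using a hypothetical config struct:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// demoConfig is an illustrative config struct, not one from the package.
type demoConfig struct {
	Workers int    `json:"save_workers_size"`
	Process string `json:"save_process,omitempty"`
}

// fill copies values from raw into cfg using each field's json tag,
// returning a named error when a required key is missing.
func fill(cfg interface{}, raw map[string]interface{}) error {
	v := reflect.ValueOf(cfg).Elem()
	t := v.Type()
	for i := 0; i < v.NumField(); i++ {
		parts := strings.Split(t.Field(i).Tag.Get("json"), ",")
		name := parts[0]
		omitempty := len(parts) > 1 && parts[1] == "omitempty"
		val, ok := raw[name]
		if !ok {
			if omitempty {
				continue
			}
			return fmt.Errorf("config property missing: %q", name)
		}
		switch v.Field(i).Kind() {
		case reflect.Int:
			// JSON unmarshals numbers into float64
			if f, isFloat := val.(float64); isFloat {
				v.Field(i).SetInt(int64(f))
			}
		case reflect.String:
			if s, isString := val.(string); isString {
				v.Field(i).SetString(s)
			}
		}
	}
	return nil
}

func main() {
	var c demoConfig
	err := fill(&c, map[string]interface{}{"save_workers_size": float64(2)})
	fmt.Println(c.Workers, err) // 2 <nil>
}
```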

+ 459 - 0
backends/gateway.go

@@ -0,0 +1,459 @@
+package backends
+
+import (
+	"errors"
+	"fmt"
+	"strconv"
+	"sync"
+	"time"
+
+	"github.com/flashmob/go-guerrilla/log"
+	"github.com/flashmob/go-guerrilla/mail"
+	"github.com/flashmob/go-guerrilla/response"
+	"runtime/debug"
+	"strings"
+)
+
+var ErrProcessorNotFound error
+
+// A backend gateway is a proxy that implements the Backend interface.
+// It is used to start multiple goroutine workers for saving mail, and then distribute email saving to the workers
+// via a channel. Shutting down via Shutdown() will stop all workers.
+// The rest of this program always talks to the backend via this gateway.
+type BackendGateway struct {
+	// channel for distributing envelopes to workers
+	conveyor chan *workerMsg
+
+	// waits for backend workers to start/stop
+	wg           sync.WaitGroup
+	workStoppers []chan bool
+	processors   []Processor
+	validators   []Processor
+
+	// controls access to state
+	sync.Mutex
+	State    backendState
+	config   BackendConfig
+	gwConfig *GatewayConfig
+}
+
+type GatewayConfig struct {
+	// WorkersSize controls how many concurrent workers to start. Defaults to 1
+	WorkersSize int `json:"save_workers_size,omitempty"`
+	// SaveProcess controls which processors to chain in a stack for saving email tasks
+	SaveProcess string `json:"save_process,omitempty"`
+	// ValidateProcess is like SaveProcess, but for recipient validation tasks
+	ValidateProcess string `json:"validate_process,omitempty"`
+	// TimeoutSave is duration before timeout when saving an email, eg "29s"
+	TimeoutSave string `json:"gw_save_timeout,omitempty"`
+	// TimeoutValidateRcpt duration before timeout when validating a recipient, eg "1s"
+	TimeoutValidateRcpt string `json:"gw_val_rcpt_timeout,omitempty"`
+}
+
+// workerMsg is what gets placed on the BackendGateway.conveyor channel
+type workerMsg struct {
+	// The email data
+	e *mail.Envelope
+	// notifyMe is used to notify the gateway of workers finishing their processing
+	notifyMe chan *notifyMsg
+	// select the task type
+	task SelectTask
+}
+
+type backendState int
+
+// possible values for state
+const (
+	BackendStateNew backendState = iota
+	BackendStateRunning
+	BackendStateShuttered
+	BackendStateError
+	BackendStateInitialized
+
+	// default timeout for saving email, if 'gw_save_timeout' not present in config
+	saveTimeout = time.Second * 30
+	// default timeout for validating rcpt to, if 'gw_val_rcpt_timeout' not present in config
+	validateRcptTimeout = time.Second * 5
+	defaultProcessor    = "Debugger"
+)
+
+func (s backendState) String() string {
+	switch s {
+	case BackendStateNew:
+		return "NewState"
+	case BackendStateRunning:
+		return "RunningState"
+	case BackendStateShuttered:
+		return "ShutteredState"
+	case BackendStateError:
+		return "ErrorState"
+	case BackendStateInitialized:
+		return "InitializedState"
+	}
+	return strconv.Itoa(int(s))
+}
+
+// New makes a new default BackendGateway backend, and initializes it using
+// backendConfig and stores the logger
+func New(backendConfig BackendConfig, l log.Logger) (Backend, error) {
+	Svc.SetMainlog(l)
+	gateway := &BackendGateway{}
+	err := gateway.Initialize(backendConfig)
+	if err != nil {
+		return nil, fmt.Errorf("error while initializing the backend: %s", err)
+	}
+	// keep the config known to be good.
+	gateway.config = backendConfig
+
+	b = Backend(gateway)
+	return b, nil
+}
+
+var workerMsgPool = sync.Pool{
+	// if not available, then create a new one
+	New: func() interface{} {
+		return &workerMsg{}
+	},
+}
+
+// reset resets a workerMsg that has been borrowed from the pool
+func (w *workerMsg) reset(e *mail.Envelope, task SelectTask) {
+	if w.notifyMe == nil {
+		w.notifyMe = make(chan *notifyMsg)
+	}
+	w.e = e
+	w.task = task
+}
+
+// Process distributes an envelope to one of the backend workers with a TaskSaveMail task
+func (gw *BackendGateway) Process(e *mail.Envelope) Result {
+	if gw.State != BackendStateRunning {
+		return NewResult(response.Canned.FailBackendNotRunning + gw.State.String())
+	}
+	// borrow a workerMsg from the pool
+	workerMsg := workerMsgPool.Get().(*workerMsg)
+	workerMsg.reset(e, TaskSaveMail)
+	// place on the channel so that one of the save mail workers can pick it up
+	gw.conveyor <- workerMsg
+	// wait for the save to complete
+	// or timeout
+	select {
+	case status := <-workerMsg.notifyMe:
+		defer workerMsgPool.Put(workerMsg) // can be recycled since we used the notifyMe channel
+		if status.err != nil {
+			return NewResult(response.Canned.FailBackendTransaction + status.err.Error())
+		}
+		return NewResult(response.Canned.SuccessMessageQueued + status.queuedID)
+
+	case <-time.After(gw.saveTimeout()):
+		Log().Error("Backend has timed out while saving email")
+		return NewResult(response.Canned.FailBackendTimeout)
+	}
+}
+
+// ValidateRcpt asks one of the workers to validate the recipient
+// Only the last recipient appended to e.RcptTo will be validated.
+func (gw *BackendGateway) ValidateRcpt(e *mail.Envelope) RcptError {
+	if gw.State != BackendStateRunning {
+		return StorageNotAvailable
+	}
+	if _, ok := gw.validators[0].(NoopProcessor); ok {
+		// no validator processors configured
+		return nil
+	}
+	// place on the channel so that one of the save mail workers can pick it up
+	workerMsg := workerMsgPool.Get().(*workerMsg)
+	workerMsg.reset(e, TaskValidateRcpt)
+	gw.conveyor <- workerMsg
+	// wait for the validation to complete
+	// or timeout
+	select {
+	case status := <-workerMsg.notifyMe:
+		if status.err != nil {
+			return status.err
+		}
+		return nil
+
+	case <-time.After(gw.validateRcptTimeout()):
+		Log().Error("Backend has timed out while validating rcpt")
+		return StorageTimeout
+	}
+}
+
+// Shutdown shuts down the backend and leaves it in BackendStateShuttered state
+func (gw *BackendGateway) Shutdown() error {
+	gw.Lock()
+	defer gw.Unlock()
+	if gw.State != BackendStateShuttered {
+		// send a signal to all workers
+		gw.stopWorkers()
+		// wait for workers to stop
+		gw.wg.Wait()
+		// call shutdown on all processor shutdowners
+		if err := Svc.shutdown(); err != nil {
+			return err
+		}
+		gw.State = BackendStateShuttered
+	}
+	return nil
+}
+
+// Reinitialize initializes the gateway with the existing config after it was shutdown
+func (gw *BackendGateway) Reinitialize() error {
+	if gw.State != BackendStateShuttered {
+		return errors.New("backend must be in BackendStateShuttered state to Reinitialize")
+	}
+	// clear the Initializers and Shutdowners
+	Svc.reset()
+
+	err := gw.Initialize(gw.config)
+	if err != nil {
+		return fmt.Errorf("error while initializing the backend: %s", err)
+	}
+
+	return err
+}
+
+// newStack creates a new Processor by chaining multiple Processors in a call stack
+// Decorators are functions of Decorator type, source files prefixed with p_*
+// Each decorator does a specific task during the processing stage.
+// This function uses the config value save_process or validate_process to figure out which Decorator to use
+func (gw *BackendGateway) newStack(stackConfig string) (Processor, error) {
+	var decorators []Decorator
+	cfg := strings.ToLower(strings.TrimSpace(stackConfig))
+	if len(cfg) == 0 {
+		return NoopProcessor{}, nil
+	}
+	items := strings.Split(cfg, "|")
+	for i := range items {
+		name := items[len(items)-1-i] // reverse order, since decorators are stacked
+		if makeFunc, ok := processors[name]; ok {
+			decorators = append(decorators, makeFunc())
+		} else {
+			ErrProcessorNotFound = fmt.Errorf("processor [%s] not found", name)
+			return nil, ErrProcessorNotFound
+		}
+	}
+	// build the call-stack of decorators
+	p := Decorate(DefaultProcessor{}, decorators...)
+	return p, nil
+}
+
+// loadConfig loads the config for the GatewayConfig
+func (gw *BackendGateway) loadConfig(cfg BackendConfig) error {
+	configType := BaseConfig(&GatewayConfig{})
+	// Note: treat config values as immutable
+	// if you need to change a config value, change in the file then
+	// send a SIGHUP
+	bcfg, err := Svc.ExtractConfig(cfg, configType)
+	if err != nil {
+		return err
+	}
+	gw.gwConfig = bcfg.(*GatewayConfig)
+	return nil
+}
+
+// Initialize builds the workers and initializes each one
+func (gw *BackendGateway) Initialize(cfg BackendConfig) error {
+	gw.Lock()
+	defer gw.Unlock()
+	if gw.State != BackendStateNew && gw.State != BackendStateShuttered {
+		return errors.New("can only Initialize in BackendStateNew or BackendStateShuttered state")
+	}
+	err := gw.loadConfig(cfg)
+	if err != nil {
+		gw.State = BackendStateError
+		return err
+	}
+	workersSize := gw.workersSize()
+	if workersSize < 1 {
+		gw.State = BackendStateError
+		return errors.New("must have at least 1 worker")
+	}
+	gw.processors = make([]Processor, 0)
+	gw.validators = make([]Processor, 0)
+	for i := 0; i < workersSize; i++ {
+		p, err := gw.newStack(gw.gwConfig.SaveProcess)
+		if err != nil {
+			gw.State = BackendStateError
+			return err
+		}
+		gw.processors = append(gw.processors, p)
+
+		v, err := gw.newStack(gw.gwConfig.ValidateProcess)
+		if err != nil {
+			gw.State = BackendStateError
+			return err
+		}
+		gw.validators = append(gw.validators, v)
+	}
+	// initialize processors
+	if err := Svc.initialize(cfg); err != nil {
+		gw.State = BackendStateError
+		return err
+	}
+	if gw.conveyor == nil {
+		gw.conveyor = make(chan *workerMsg, workersSize)
+	}
+	// ready to start
+	gw.State = BackendStateInitialized
+	return nil
+}
+
+// Start starts the worker goroutines, assuming it has been initialized or shuttered before
+func (gw *BackendGateway) Start() error {
+	gw.Lock()
+	defer gw.Unlock()
+	if gw.State == BackendStateInitialized || gw.State == BackendStateShuttered {
+		// we start our workers
+		workersSize := gw.workersSize()
+		// make our slice of channels for stopping
+		gw.workStoppers = make([]chan bool, 0)
+		// set the wait group
+		gw.wg.Add(workersSize)
+
+		for i := 0; i < workersSize; i++ {
+			stop := make(chan bool)
+			go func(workerId int, stop chan bool) {
+				// blocks here until the worker exits
+				for {
+					state := gw.workDispatcher(
+						gw.conveyor,
+						gw.processors[workerId],
+						gw.validators[workerId],
+						workerId+1,
+						stop)
+					// keep running after panic
+					if state != dispatcherStatePanic {
+						break
+					}
+				}
+				gw.wg.Done()
+			}(i, stop)
+			gw.workStoppers = append(gw.workStoppers, stop)
+		}
+		gw.State = BackendStateRunning
+		return nil
+	} else {
+		return fmt.Errorf("cannot start backend because it's in %s state", gw.State)
+	}
+}
+
+// workersSize gets the number of workers to use for saving email by reading the save_workers_size config value
+// Returns 1 if no config value was set
+func (gw *BackendGateway) workersSize() int {
+	if gw.gwConfig.WorkersSize <= 0 {
+		return 1
+	}
+	return gw.gwConfig.WorkersSize
+}
+
+// saveTimeout returns the maximum amount of time to wait before timing out a save processing task
+func (gw *BackendGateway) saveTimeout() time.Duration {
+	if gw.gwConfig.TimeoutSave == "" {
+		return saveTimeout
+	}
+	t, err := time.ParseDuration(gw.gwConfig.TimeoutSave)
+	if err != nil {
+		return saveTimeout
+	}
+	return t
+}
+
+// validateRcptTimeout returns the maximum amount of time to wait before timing out a recipient validation task
+func (gw *BackendGateway) validateRcptTimeout() time.Duration {
+	if gw.gwConfig.TimeoutValidateRcpt == "" {
+		return validateRcptTimeout
+	}
+	t, err := time.ParseDuration(gw.gwConfig.TimeoutValidateRcpt)
+	if err != nil {
+		return validateRcptTimeout
+	}
+	return t
+}
+
+type dispatcherState int
+
+const (
+	dispatcherStateStopped dispatcherState = iota
+	dispatcherStateIdle
+	dispatcherStateWorking
+	dispatcherStateNotify
+	dispatcherStatePanic
+)
+
+func (gw *BackendGateway) workDispatcher(
+	workIn chan *workerMsg,
+	save Processor,
+	validate Processor,
+	workerId int,
+	stop chan bool) (state dispatcherState) {
+
+	var msg *workerMsg
+
+	defer func() {
+
+		// panic recovery mechanism: it may panic when processing
+		// since processors may call arbitrary code, some may be 3rd party / unstable
+		// we need to detect the panic, and notify the backend that it failed & unlock the envelope
+		if r := recover(); r != nil {
+			Log().Error("worker recovered from panic:", r, string(debug.Stack()))
+
+			if state == dispatcherStateWorking {
+				msg.notifyMe <- &notifyMsg{err: errors.New("storage failed")}
+				msg.e.Unlock()
+			}
+			state = dispatcherStatePanic
+			return
+		}
+		// state is dispatcherStateStopped if it reached here
+		return
+
+	}()
+	state = dispatcherStateIdle
+	Log().Infof("processing worker started (#%d)", workerId)
+	for {
+		select {
+		case <-stop:
+			state = dispatcherStateStopped
+			Log().Infof("stop signal for worker (#%d)", workerId)
+			return
+		case msg = <-workIn:
+			msg.e.Lock()
+			state = dispatcherStateWorking
+			if msg.task == TaskSaveMail {
+				// process the email here
+				result, _ := save.Process(msg.e, TaskSaveMail)
+				state = dispatcherStateNotify
+				if result.Code() < 300 {
+					// if all good, let the gateway know that it was queued
+					msg.notifyMe <- &notifyMsg{nil, msg.e.QueuedId}
+				} else {
+					// notify the gateway about the error
+					msg.notifyMe <- &notifyMsg{err: errors.New(result.String())}
+				}
+			} else if msg.task == TaskValidateRcpt {
+				_, err := validate.Process(msg.e, TaskValidateRcpt)
+				state = dispatcherStateNotify
+				if err != nil {
+					// validation failed
+					msg.notifyMe <- &notifyMsg{err: err}
+				} else {
+					// all good.
+					msg.notifyMe <- &notifyMsg{err: nil}
+				}
+			}
+			msg.e.Unlock()
+		}
+		state = dispatcherStateIdle
+	}
+}
+
+// stopWorkers sends a signal to all workers to stop
+func (gw *BackendGateway) stopWorkers() {
+	for i := range gw.workStoppers {
+		gw.workStoppers[i] <- true
+	}
+}

+ 113 - 0
backends/gateway_test.go

@@ -0,0 +1,113 @@
+package backends
+
+import (
+	"fmt"
+	"github.com/flashmob/go-guerrilla/log"
+	"github.com/flashmob/go-guerrilla/mail"
+	"strings"
+	"testing"
+	"time"
+)
+
+func TestStates(t *testing.T) {
+	gw := BackendGateway{}
+	str := fmt.Sprintf("%s", gw.State)
+	if strings.Index(str, "NewState") != 0 {
+		t.Error("Backend should begin in NewState")
+	}
+}
+
+func TestInitialize(t *testing.T) {
+	c := BackendConfig{
+		"save_process":       "HeadersParser|Debugger",
+		"log_received_mails": true,
+		"save_workers_size":  "1",
+	}
+
+	gateway := &BackendGateway{}
+	err := gateway.Initialize(c)
+	if err != nil {
+		t.Error("Gateway did not init because:", err)
+		t.Fail()
+	}
+	if gateway.processors == nil {
+		t.Error("gateway.processors should not be nil")
+	} else if len(gateway.processors) != 1 {
+		t.Error("len(gateway.processors) should be 1, but got", len(gateway.processors))
+	}
+
+	if gateway.conveyor == nil {
+		t.Error("gateway.conveyor should not be nil")
+	} else if cap(gateway.conveyor) != gateway.workersSize() {
+		t.Error("gateway.conveyor channel buffer cap does not match worker size, cap was", cap(gateway.conveyor))
+	}
+
+	if gateway.State != BackendStateInitialized {
+		t.Error("gateway.State is not in initialized state, got ", gateway.State)
+	}
+
+}
+
+func TestStartProcessStop(t *testing.T) {
+	c := BackendConfig{
+		"save_process":       "HeadersParser|Debugger",
+		"log_received_mails": true,
+		"save_workers_size":  2,
+	}
+
+	gateway := &BackendGateway{}
+	err := gateway.Initialize(c)
+
+	mainlog, _ := log.GetLogger(log.OutputOff.String(), "debug")
+	Svc.SetMainlog(mainlog)
+
+	if err != nil {
+		t.Error("Gateway did not init because:", err)
+		t.Fail()
+	}
+	err = gateway.Start()
+	if err != nil {
+		t.Error("Gateway did not start because:", err)
+		t.Fail()
+	}
+	if gateway.State != BackendStateRunning {
+		t.Error("gateway.State is not in running state, got ", gateway.State)
+	}
+	// can we place an envelope on the conveyor channel?
+
+	e := &mail.Envelope{
+		RemoteIP: "127.0.0.1",
+		QueuedId: "abc12345",
+		Helo:     "helo.example.com",
+		MailFrom: mail.Address{User: "test", Host: "example.com"},
+		TLS:      true,
+	}
+	e.PushRcpt(mail.Address{User: "test", Host: "example.com"})
+	e.Data.WriteString("Subject:Test\n\nThis is a test.")
+	notify := make(chan *notifyMsg)
+
+	gateway.conveyor <- &workerMsg{e, notify, TaskSaveMail}
+
+	// it should not produce any errors
+	// headers (subject) should be parsed.
+
+	select {
+	case status := <-notify:
+
+		if status.err != nil {
+			t.Error("envelope processing failed with:", status.err)
+		}
+		if e.Header["Subject"][0] != "Test" {
+			t.Error("envelope processing did not parse header")
+		}
+
+	case <-time.After(time.Second):
+		t.Error("gateway did not respond after 1 second")
+		t.Fail()
+	}
+
+	err = gateway.Shutdown()
+	if err != nil {
+		t.Error("Gateway did not shutdown")
+	}
+}

+ 0 - 448
backends/guerrilla_db_redis.go

@@ -1,448 +0,0 @@
-package backends
-
-// This backend is presented here as an example only, please modify it to your needs.
-// The backend stores the email data in Redis.
-// Other meta-information is stored in MySQL to be joined later.
-// A lot of email gets discarded without viewing on Guerrilla Mail,
-// so it's much faster to put in Redis, where other programs can
-// process it later, without touching the disk.
-//
-// Some features:
-// - It batches the SQL inserts into a single query and inserts either after a time threshold or if the batch is full
-// - If the mysql driver crashes, it's able to recover, log the incident and resume again.
-// - It also does a clean shutdown - it tries to save everything before returning
-//
-// Short history:
-// Started with issuing an insert query for each single email and another query to update the tally
-// Then applied the following optimizations:
-// - Moved tally updates to another background process which does the tallying in a single query
-// - Changed the MySQL queries to insert in batch
-// - Made a Compressor that recycles buffers using sync.Pool
-// The result was around 400% speed improvement. If you know of any more improvements, please share!
-// - Added the recovery mechanism,
-
-import (
-	"fmt"
-
-	"time"
-
-	"github.com/garyburd/redigo/redis"
-
-	"bytes"
-	"compress/zlib"
-	"database/sql"
-	_ "github.com/go-sql-driver/mysql"
-
-	"github.com/go-sql-driver/mysql"
-	"io"
-	"runtime/debug"
-	"strings"
-	"sync"
-)
-
-// how many rows to batch at a time
-const GuerrillaDBAndRedisBatchMax = 2
-
-// tick on every...
-const GuerrillaDBAndRedisBatchTimeout = time.Second * 3
-
-func init() {
-	backends["guerrilla-db-redis"] = &AbstractBackend{
-		extend: &GuerrillaDBAndRedisBackend{}}
-}
-
-type GuerrillaDBAndRedisBackend struct {
-	AbstractBackend
-	config    guerrillaDBAndRedisConfig
-	batcherWg sync.WaitGroup
-	// cache prepared queries
-	cache stmtCache
-}
-
-// statement cache. It's an array, not slice
-type stmtCache [GuerrillaDBAndRedisBatchMax]*sql.Stmt
-
-type guerrillaDBAndRedisConfig struct {
-	NumberOfWorkers    int    `json:"save_workers_size"`
-	MysqlTable         string `json:"mail_table"`
-	MysqlDB            string `json:"mysql_db"`
-	MysqlHost          string `json:"mysql_host"`
-	MysqlPass          string `json:"mysql_pass"`
-	MysqlUser          string `json:"mysql_user"`
-	RedisExpireSeconds int    `json:"redis_expire_seconds"`
-	RedisInterface     string `json:"redis_interface"`
-	PrimaryHost        string `json:"primary_mail_host"`
-}
-
-func convertError(name string) error {
-	return fmt.Errorf("failed to load backend config (%s)", name)
-}
-
-// Load the backend config for the backend. It has already been unmarshalled
-// from the main config file 'backend' config "backend_config"
-// Now we need to convert each type and copy into the guerrillaDBAndRedisConfig struct
-func (g *GuerrillaDBAndRedisBackend) loadConfig(backendConfig BackendConfig) (err error) {
-	configType := baseConfig(&guerrillaDBAndRedisConfig{})
-	bcfg, err := g.extractConfig(backendConfig, configType)
-	if err != nil {
-		return err
-	}
-	m := bcfg.(*guerrillaDBAndRedisConfig)
-	g.config = *m
-	return nil
-}
-
-func (g *GuerrillaDBAndRedisBackend) getNumberOfWorkers() int {
-	return g.config.NumberOfWorkers
-}
-
-type redisClient struct {
-	isConnected bool
-	conn        redis.Conn
-	time        int
-}
-
-// compressedData struct will be compressed using zlib when printed via fmt
-type compressedData struct {
-	extraHeaders []byte
-	data         *bytes.Buffer
-	pool         *sync.Pool
-}
-
-// newCompressedData returns a new CompressedData
-func newCompressedData() *compressedData {
-	var p = sync.Pool{
-		New: func() interface{} {
-			var b bytes.Buffer
-			return &b
-		},
-	}
-	return &compressedData{
-		pool: &p,
-	}
-}
-
-// Set the extraheaders and buffer of data to compress
-func (c *compressedData) set(b []byte, d *bytes.Buffer) {
-	c.extraHeaders = b
-	c.data = d
-}
-
-// implement Stringer interface
-func (c *compressedData) String() string {
-	if c.data == nil {
-		return ""
-	}
-	//borrow a buffer form the pool
-	b := c.pool.Get().(*bytes.Buffer)
-	// put back in the pool
-	defer func() {
-		b.Reset()
-		c.pool.Put(b)
-	}()
-
-	var r *bytes.Reader
-	w, _ := zlib.NewWriterLevel(b, zlib.BestSpeed)
-	r = bytes.NewReader(c.extraHeaders)
-	io.Copy(w, r)
-	io.Copy(w, c.data)
-	w.Close()
-	return b.String()
-}
-
-// clear it, without clearing the pool
-func (c *compressedData) clear() {
-	c.extraHeaders = []byte{}
-	c.data = nil
-}
-
-// prepares the sql query with the number of rows that can be batched with it
-func (g *GuerrillaDBAndRedisBackend) prepareInsertQuery(rows int, db *sql.DB) *sql.Stmt {
-	if rows == 0 {
-		panic("rows argument cannot be 0")
-	}
-	if g.cache[rows-1] != nil {
-		return g.cache[rows-1]
-	}
-	sqlstr := "INSERT INTO " + g.config.MysqlTable + " "
-	sqlstr += "(`date`, `to`, `from`, `subject`, `body`, `charset`, `mail`, `spam_score`, `hash`, `content_type`, `recipient`, `has_attach`, `ip_addr`, `return_path`, `is_tls`)"
-	sqlstr += " values "
-	values := "(NOW(), ?, ?, ?, ? , 'UTF-8' , ?, 0, ?, '', ?, 0, ?, ?, ?)"
-	// add more rows
-	comma := ""
-	for i := 0; i < rows; i++ {
-		sqlstr += comma + values
-		if comma == "" {
-			comma = ","
-		}
-	}
-	stmt, sqlErr := db.Prepare(sqlstr)
-	if sqlErr != nil {
-		mainlog.WithError(sqlErr).Fatalf("failed while db.Prepare(INSERT...)")
-	}
-	// cache it
-	g.cache[rows-1] = stmt
-	return stmt
-}
-
-func (g *GuerrillaDBAndRedisBackend) doQuery(c int, db *sql.DB, insertStmt *sql.Stmt, vals *[]interface{}) {
-	var execErr error
-	defer func() {
-		if r := recover(); r != nil {
-			//logln(1, fmt.Sprintf("Recovered in %v", r))
-			mainlog.Error("Recovered form panic:", r, string(debug.Stack()))
-			sum := 0
-			for _, v := range *vals {
-				if str, ok := v.(string); ok {
-					sum = sum + len(str)
-				}
-			}
-			mainlog.Errorf("panic while inserting query [%s] size:%d, err %v", r, sum, execErr)
-			panic("query failed")
-		}
-	}()
-	// prepare the query used to insert when rows reaches batchMax
-	insertStmt = g.prepareInsertQuery(c, db)
-	_, execErr = insertStmt.Exec(*vals...)
-	if execErr != nil {
-		mainlog.WithError(execErr).Error("There was a problem the insert")
-	}
-}
-
-// Batches the rows from the feeder chan in to a single INSERT statement.
-// Execute the batches query when:
-// - number of batched rows reaches a threshold, i.e. count n = threshold
-// - or, no new rows within a certain time, i.e. times out
-// The goroutine can either exit if there's a panic or feeder channel closes
-// it returns feederOk which signals if the feeder chanel was ok (still open) while returning
-// if it feederOk is false, then it means the feeder chanel is closed
-func (g *GuerrillaDBAndRedisBackend) insertQueryBatcher(feeder chan []interface{}, db *sql.DB) (feederOk bool) {
-	// controls shutdown
-	defer g.batcherWg.Done()
-	g.batcherWg.Add(1)
-	// vals is where values are batched to
-	var vals []interface{}
-	// how many rows were batched
-	count := 0
-	// The timer will tick every second.
-	// Interrupting the select clause when there's no data on the feeder channel
-	t := time.NewTimer(GuerrillaDBAndRedisBatchTimeout)
-	// prepare the query used to insert when rows reaches batchMax
-	insertStmt := g.prepareInsertQuery(GuerrillaDBAndRedisBatchMax, db)
-	// inserts executes a batched insert query, clears the vals and resets the count
-	insert := func(c int) {
-		if c > 0 {
-			g.doQuery(c, db, insertStmt, &vals)
-		}
-		vals = nil
-		count = 0
-	}
-	defer func() {
-		if r := recover(); r != nil {
-			mainlog.Error("insertQueryBatcher caught a panic", r)
-		}
-	}()
-	// Keep getting values from feeder and add to batch.
-	// if feeder times out, execute the batched query
-	// otherwise, execute the batched query once it reaches the GuerrillaDBAndRedisBatchMax threshold
-	feederOk = true
-	for {
-		select {
-		// it may panic when reading on a closed feeder channel. feederOK detects if it was closed
-		case row, feederOk := <-feeder:
-			if row == nil {
-				mainlog.Info("Query batchaer exiting")
-				// Insert any remaining rows
-				insert(count)
-				return feederOk
-			}
-			vals = append(vals, row...)
-			count++
-			mainlog.Debug("new feeder row:", row, " cols:", len(row), " count:", count, " worker", workerId)
-			if count >= GuerrillaDBAndRedisBatchMax {
-				insert(GuerrillaDBAndRedisBatchMax)
-			}
-			// stop timer from firing (reset the interrupt)
-			if !t.Stop() {
-				<-t.C
-			}
-			t.Reset(GuerrillaDBAndRedisBatchTimeout)
-		case <-t.C:
-			// anything to insert?
-			if n := len(vals); n > 0 {
-				insert(count)
-			}
-			t.Reset(GuerrillaDBAndRedisBatchTimeout)
-		}
-	}
-}
-
-func trimToLimit(str string, limit int) string {
-	ret := strings.TrimSpace(str)
-	if len(str) > limit {
-		ret = str[:limit]
-	}
-	return ret
-}
-
-var workerId = 0
-
-func (g *GuerrillaDBAndRedisBackend) mysqlConnect() (*sql.DB, error) {
-	conf := mysql.Config{
-		User:         g.config.MysqlUser,
-		Passwd:       g.config.MysqlPass,
-		DBName:       g.config.MysqlDB,
-		Net:          "tcp",
-		Addr:         g.config.MysqlHost,
-		ReadTimeout:  GuerrillaDBAndRedisBatchTimeout + (time.Second * 10),
-		WriteTimeout: GuerrillaDBAndRedisBatchTimeout + (time.Second * 10),
-		Params:       map[string]string{"collation": "utf8_general_ci"},
-	}
-	if db, err := sql.Open("mysql", conf.FormatDSN()); err != nil {
-		mainlog.Error("cannot open mysql", err)
-		return nil, err
-	} else {
-		return db, nil
-	}
-
-}
-
-func (g *GuerrillaDBAndRedisBackend) saveMailWorker(saveMailChan chan *savePayload) {
-	var to, body string
-
-	var redisErr error
-
-	workerId++
-
-	redisClient := &redisClient{}
-	var db *sql.DB
-	var err error
-	db, err = g.mysqlConnect()
-	if err != nil {
-		mainlog.Fatalf("cannot open mysql: %s", err)
-	}
-
-	// start the query SQL batching where we will send data via the feeder channel
-	feeder := make(chan []interface{}, 1)
-	go func() {
-		for {
-			if feederOK := g.insertQueryBatcher(feeder, db); !feederOK {
-				mainlog.Debug("insertQueryBatcher exited")
-				return
-			}
-			// if insertQueryBatcher panics, it can recover and go in again
-			mainlog.Debug("resuming insertQueryBatcher")
-		}
-
-	}()
-
-	defer func() {
-		if r := recover(); r != nil {
-			//recover form closed channel
-			mainlog.Error("panic recovered in saveMailWorker", r)
-		}
-		db.Close()
-		if redisClient.conn != nil {
-			mainlog.Infof("closed redis")
-			redisClient.conn.Close()
-		}
-		// close the feeder & wait for query batcher to exit.
-		close(feeder)
-		g.batcherWg.Wait()
-
-	}()
-	var vals []interface{}
-	data := newCompressedData()
-	//  receives values from the channel repeatedly until it is closed.
-
-	for {
-		payload := <-saveMailChan
-		if payload == nil {
-			mainlog.Debug("No more saveMailChan payload")
-			return
-		}
-		mainlog.Debug("Got mail from chan", payload.mail.RemoteAddress)
-		to = trimToLimit(strings.TrimSpace(payload.recipient.User)+"@"+g.config.PrimaryHost, 255)
-		payload.mail.Helo = trimToLimit(payload.mail.Helo, 255)
-		payload.recipient.Host = trimToLimit(payload.recipient.Host, 255)
-		ts := fmt.Sprintf("%d", time.Now().UnixNano())
-		payload.mail.ParseHeaders()
-		hash := MD5Hex(
-			to,
-			payload.mail.MailFrom.String(),
-			payload.mail.Subject,
-			ts)
-		// Add extra headers
-		var addHead string
-		addHead += "Delivered-To: " + to + "\r\n"
-		addHead += "Received: from " + payload.mail.Helo + " (" + payload.mail.Helo + "  [" + payload.mail.RemoteAddress + "])\r\n"
-		addHead += "	by " + payload.recipient.Host + " with SMTP id " + hash + "@" + payload.recipient.Host + ";\r\n"
-		addHead += "	" + time.Now().Format(time.RFC1123Z) + "\r\n"
-
-		// data will be compressed when printed, with addHead added to beginning
-
-		data.set([]byte(addHead), &payload.mail.Data)
-		body = "gzencode"
-
-		// data will be written to redis - it implements the Stringer interface, redigo uses fmt to
-		// print the data to redis.
-
-		redisErr = redisClient.redisConnection(g.config.RedisInterface)
-		if redisErr == nil {
-			_, doErr := redisClient.conn.Do("SETEX", hash, g.config.RedisExpireSeconds, data)
-			if doErr == nil {
-				body = "redis" // the backend system will know to look in redis for the message data
-				data.clear()   // blank
-			}
-		} else {
-			mainlog.WithError(redisErr).Warn("Error while connecting redis")
-		}
-
-		vals = []interface{}{} // clear the vals
-		vals = append(vals,
-			trimToLimit(to, 255),
-			trimToLimit(payload.mail.MailFrom.String(), 255),
-			trimToLimit(payload.mail.Subject, 255),
-			body,
-			data.String(),
-			hash,
-			trimToLimit(to, 255),
-			payload.mail.RemoteAddress,
-			trimToLimit(payload.mail.MailFrom.String(), 255),
-			payload.mail.TLS)
-		feeder <- vals
-		payload.savedNotify <- &saveStatus{nil, hash}
-
-	}
-}
-
-func (c *redisClient) redisConnection(redisInterface string) (err error) {
-	if c.isConnected == false {
-		c.conn, err = redis.Dial("tcp", redisInterface)
-		if err != nil {
-			// handle error
-			return err
-		}
-		c.isConnected = true
-	}
-	return nil
-}
-
-// test database connection settings
-func (g *GuerrillaDBAndRedisBackend) testSettings() (err error) {
-
-	var db *sql.DB
-
-	if db, err = g.mysqlConnect(); err != nil {
-		err = fmt.Errorf("MySql cannot connect, check your settings: %s", err)
-	} else {
-		db.Close()
-	}
-
-	redisClient := &redisClient{}
-	if redisErr := redisClient.redisConnection(g.config.RedisInterface); redisErr != nil {
-		err = fmt.Errorf("Redis cannot connect, check your settings: %s", redisErr)
-	}
-
-	return
-}

+ 107 - 0
backends/p_compressor.go

@@ -0,0 +1,107 @@
+package backends
+
+import (
+	"bytes"
+	"compress/zlib"
+	"github.com/flashmob/go-guerrilla/mail"
+	"io"
+	"sync"
+)
+
+// ----------------------------------------------------------------------------------
+// Processor Name: compressor
+// ----------------------------------------------------------------------------------
+// Description   : Compress the e.Data (email data) and e.DeliveryHeader together
+// ----------------------------------------------------------------------------------
+// Config Options: None
+// --------------:-------------------------------------------------------------------
+// Input         : e.Data, e.DeliveryHeader generated by Header() processor
+// ----------------------------------------------------------------------------------
+// Output        : sets a pointer to a compressor in e.Values["zlib-compressor"]
+//               : to write the compressed data, simply use fmt to print as a string,
+//               : eg. fmt.Printf("%s", e.Values["zlib-compressor"])
+//               : or call its String() method directly
+//               : Note that it can only be output once, since the buffer is
+//               : destroyed after being printed
+// ----------------------------------------------------------------------------------
+func init() {
+	processors["compressor"] = func() Decorator {
+		return Compressor()
+	}
+}
+
+// compressor struct: its contents will be compressed using zlib when printed via fmt
+type compressor struct {
+	extraHeaders []byte
+	data         *bytes.Buffer
+	// the pool is used to recycle buffers to ease up on the garbage collector
+	pool *sync.Pool
+}
+
+// newCompressor returns a new compressor backed by a buffer pool
+func newCompressor() *compressor {
+	var p = sync.Pool{
+		// if no buffer is available in the pool, create a new one
+		New: func() interface{} {
+			var b bytes.Buffer
+			return &b
+		},
+	}
+	return &compressor{
+		pool: &p,
+	}
+}
+
+// set stores the extra headers and the buffer of data to compress
+func (c *compressor) set(b []byte, d *bytes.Buffer) {
+	c.extraHeaders = b
+	c.data = d
+}
+
+// String implements the Stringer interface.
+// Can only be called once!
+// This is because the compression buffer will be reset and compressor will be returned to the pool
+func (c *compressor) String() string {
+	if c.data == nil {
+		return ""
+	}
+	// borrow a buffer from the pool
+	b := c.pool.Get().(*bytes.Buffer)
+	// put back in the pool
+	defer func() {
+		b.Reset()
+		c.pool.Put(b)
+	}()
+
+	w, _ := zlib.NewWriterLevel(b, zlib.BestSpeed)
+	r := bytes.NewReader(c.extraHeaders)
+	io.Copy(w, r)
+	io.Copy(w, c.data)
+	w.Close()
+	return b.String()
+}
+
+// clear it, without clearing the pool
+func (c *compressor) clear() {
+	c.extraHeaders = []byte{}
+	c.data = nil
+}
+
+func Compressor() Decorator {
+	return func(p Processor) Processor {
+		return ProcessWith(func(e *mail.Envelope, task SelectTask) (Result, error) {
+			if task == TaskSaveMail {
+				compressor := newCompressor()
+				compressor.set([]byte(e.DeliveryHeader), &e.Data)
+				// put the pointer in there for other processors to use later in the line
+				e.Values["zlib-compressor"] = compressor
+				// continue to the next Processor in the decorator stack
+				return p.Process(e, task)
+			} else {
+				return p.Process(e, task)
+			}
+		})
+	}
+}

+ 55 - 0
backends/p_debugger.go

@@ -0,0 +1,55 @@
+package backends
+
+import (
+	"github.com/flashmob/go-guerrilla/mail"
+	"strings"
+)
+
+// ----------------------------------------------------------------------------------
+// Processor Name: debugger
+// ----------------------------------------------------------------------------------
+// Description   : Log received emails
+// ----------------------------------------------------------------------------------
+// Config Options: log_received_mails bool - log if true
+// --------------:-------------------------------------------------------------------
+// Input         : e.MailFrom, e.RcptTo, e.Header
+// ----------------------------------------------------------------------------------
+// Output        : none (only output to the log if enabled)
+// ----------------------------------------------------------------------------------
+func init() {
+	processors[strings.ToLower(defaultProcessor)] = func() Decorator {
+		return Debugger()
+	}
+}
+
+type debuggerConfig struct {
+	LogReceivedMails bool `json:"log_received_mails"`
+}
+
+func Debugger() Decorator {
+	var config *debuggerConfig
+	initFunc := InitializeWith(func(backendConfig BackendConfig) error {
+		configType := BaseConfig(&debuggerConfig{})
+		bcfg, err := Svc.ExtractConfig(backendConfig, configType)
+		if err != nil {
+			return err
+		}
+		config = bcfg.(*debuggerConfig)
+		return nil
+	})
+	Svc.AddInitializer(initFunc)
+	return func(p Processor) Processor {
+		return ProcessWith(func(e *mail.Envelope, task SelectTask) (Result, error) {
+			if task == TaskSaveMail {
+				if config.LogReceivedMails {
+					Log().Infof("Mail from: %s / to: %v", e.MailFrom.String(), e.RcptTo)
+					Log().Info("Headers are:", e.Header)
+				}
+				// continue to the next Processor in the decorator stack
+				return p.Process(e, task)
+			} else {
+				return p.Process(e, task)
+			}
+		})
+	}
+}

+ 482 - 0
backends/p_guerrilla_db_redis.go

@@ -0,0 +1,482 @@
+package backends
+
+import (
+	"bytes"
+	"compress/zlib"
+	"database/sql"
+	"fmt"
+	"github.com/flashmob/go-guerrilla/mail"
+	"github.com/garyburd/redigo/redis"
+	"github.com/go-sql-driver/mysql"
+	"io"
+	"math/rand"
+	"runtime/debug"
+	"strings"
+	"sync"
+	"time"
+)
+
+// ----------------------------------------------------------------------------------
+// Processor Name: guerrillaredisdb
+// ----------------------------------------------------------------------------------
+// Description   : Saves the body to redis, metadata to mysql. Example only.
+//               : Limitation: it doesn't save multiple recipients or validate them
+// ----------------------------------------------------------------------------------
+// Config Options: ...
+// --------------:-------------------------------------------------------------------
+// Input         : envelope
+// ----------------------------------------------------------------------------------
+// Output        :
+// ----------------------------------------------------------------------------------
+func init() {
+	processors["guerrillaredisdb"] = func() Decorator {
+		return GuerrillaDbReddis()
+	}
+}
+
+var queryBatcherId = 0
+
+// how many rows to batch at a time
+const GuerrillaDBAndRedisBatchMax = 50
+
+// default timeout between batch inserts
+const GuerrillaDBAndRedisBatchTimeout = time.Second * 3
+
+type GuerrillaDBAndRedisBackend struct {
+	config    *guerrillaDBAndRedisConfig
+	batcherWg sync.WaitGroup
+	// cache prepared queries
+	cache stmtCache
+
+	batcherStoppers []chan bool
+}
+
+// statement cache. It's an array, not a slice
+type stmtCache [GuerrillaDBAndRedisBatchMax]*sql.Stmt
+
+type guerrillaDBAndRedisConfig struct {
+	NumberOfWorkers    int    `json:"save_workers_size"`
+	MysqlTable         string `json:"mail_table"`
+	MysqlDB            string `json:"mysql_db"`
+	MysqlHost          string `json:"mysql_host"`
+	MysqlPass          string `json:"mysql_pass"`
+	MysqlUser          string `json:"mysql_user"`
+	RedisExpireSeconds int    `json:"redis_expire_seconds"`
+	RedisInterface     string `json:"redis_interface"`
+	PrimaryHost        string `json:"primary_mail_host"`
+	BatchTimeout       int    `json:"redis_mysql_batch_timeout,omitempty"`
+}
+
+// Load the backend config for the backend. It has already been unmarshalled
+// from the main config file 'backend' config "backend_config"
+// Now we need to convert each type and copy into the guerrillaDBAndRedisConfig struct
+func (g *GuerrillaDBAndRedisBackend) loadConfig(backendConfig BackendConfig) (err error) {
+	configType := BaseConfig(&guerrillaDBAndRedisConfig{})
+	bcfg, err := Svc.ExtractConfig(backendConfig, configType)
+	if err != nil {
+		return err
+	}
+	m := bcfg.(*guerrillaDBAndRedisConfig)
+	g.config = m
+	return nil
+}
+
+func (g *GuerrillaDBAndRedisBackend) getNumberOfWorkers() int {
+	return g.config.NumberOfWorkers
+}
+
+type redisClient struct {
+	isConnected bool
+	conn        redis.Conn
+	time        int
+}
+
+// compressedData struct will be compressed using zlib when printed via fmt
+type compressedData struct {
+	extraHeaders []byte
+	data         *bytes.Buffer
+	pool         *sync.Pool
+}
+
+// newCompressedData returns a new compressedData
+func newCompressedData() *compressedData {
+	var p = sync.Pool{
+		New: func() interface{} {
+			var b bytes.Buffer
+			return &b
+		},
+	}
+	return &compressedData{
+		pool: &p,
+	}
+}
+
+// set stores the extra headers and the buffer of data to compress
+func (c *compressedData) set(b []byte, d *bytes.Buffer) {
+	c.extraHeaders = b
+	c.data = d
+}
+
+// implement Stringer interface
+func (c *compressedData) String() string {
+	if c.data == nil {
+		return ""
+	}
+	// borrow a buffer from the pool
+	b := c.pool.Get().(*bytes.Buffer)
+	// put back in the pool
+	defer func() {
+		b.Reset()
+		c.pool.Put(b)
+	}()
+
+	w, _ := zlib.NewWriterLevel(b, zlib.BestSpeed)
+	r := bytes.NewReader(c.extraHeaders)
+	io.Copy(w, r)
+	io.Copy(w, c.data)
+	w.Close()
+	return b.String()
+}
+
+// clear it, without clearing the pool
+func (c *compressedData) clear() {
+	c.extraHeaders = []byte{}
+	c.data = nil
+}
+
+// prepares the sql query with the number of rows that can be batched with it
+func (g *GuerrillaDBAndRedisBackend) prepareInsertQuery(rows int, db *sql.DB) *sql.Stmt {
+	if rows == 0 {
+		panic("rows argument cannot be 0")
+	}
+	if g.cache[rows-1] != nil {
+		return g.cache[rows-1]
+	}
+	sqlstr := "INSERT INTO " + g.config.MysqlTable + " "
+	sqlstr += "(`date`, `to`, `from`, `subject`, `body`, `charset`, `mail`, `spam_score`, `hash`, `content_type`, `recipient`, `has_attach`, `ip_addr`, `return_path`, `is_tls`)"
+	sqlstr += " values "
+	values := "(NOW(), ?, ?, ?, ? , 'UTF-8' , ?, 0, ?, '', ?, 0, ?, ?, ?)"
+	// add more rows
+	comma := ""
+	for i := 0; i < rows; i++ {
+		sqlstr += comma + values
+		if comma == "" {
+			comma = ","
+		}
+	}
+	stmt, sqlErr := db.Prepare(sqlstr)
+	if sqlErr != nil {
+		Log().WithError(sqlErr).Fatalf("failed while db.Prepare(INSERT...)")
+	}
+	// cache it
+	g.cache[rows-1] = stmt
+	return stmt
+}
+
+func (g *GuerrillaDBAndRedisBackend) doQuery(c int, db *sql.DB, insertStmt *sql.Stmt, vals *[]interface{}) error {
+	var execErr error
+	defer func() {
+		if r := recover(); r != nil {
+			Log().Error("Recovered from panic:", r, string(debug.Stack()))
+			sum := 0
+			for _, v := range *vals {
+				if str, ok := v.(string); ok {
+					sum = sum + len(str)
+				}
+			}
+			Log().Errorf("panic while inserting query [%s] size:%d, err %v", r, sum, execErr)
+			panic("query failed")
+		}
+	}()
+	// prepare the query used to insert when rows reaches batchMax
+	insertStmt = g.prepareInsertQuery(c, db)
+	_, execErr = insertStmt.Exec(*vals...)
+	if execErr != nil {
+		Log().WithError(execErr).Error("There was a problem with the insert")
+	}
+	return execErr
+}
+
+// insertQueryBatcher batches the rows from the feeder chan into a single INSERT statement.
+// The batched query is executed when:
+// - the number of batched rows reaches a threshold, i.e. count n = threshold
+// - or, no new rows arrive within a certain time, i.e. it times out
+// The goroutine exits if there's a panic or the stop channel signals shutdown.
+// It returns feederOk, which signals whether the feeder channel was still open on return;
+// if feederOk is false, the feeder channel was closed
+func (g *GuerrillaDBAndRedisBackend) insertQueryBatcher(
+	feeder feedChan,
+	db *sql.DB,
+	batcherId int,
+	stop chan bool) (feederOk bool) {
+
+	// controls shutdown
+	defer g.batcherWg.Done()
+	g.batcherWg.Add(1)
+	// vals is where values are batched to
+	var vals []interface{}
+	// how many rows were batched
+	count := 0
+	// The timer fires when the batch timeout elapses,
+	// interrupting the select clause when there's no data on the feeder channel
+	timeo := GuerrillaDBAndRedisBatchTimeout
+	if g.config.BatchTimeout > 0 {
+		timeo = time.Duration(g.config.BatchTimeout) * time.Second
+	}
+	t := time.NewTimer(timeo)
+	// prepare the query used to insert when rows reaches batchMax
+	insertStmt := g.prepareInsertQuery(GuerrillaDBAndRedisBatchMax, db)
+	// inserts executes a batched insert query, clears the vals and resets the count
+	inserter := func(c int) {
+		if c > 0 {
+			err := g.doQuery(c, db, insertStmt, &vals)
+			if err != nil {
+				// maybe connection prob?
+				// retry the sql query
+				attempts := 3
+				for i := 0; i < attempts; i++ {
+					Log().Infof("retrying query, rows[%d]", c)
+					time.Sleep(time.Second)
+					err = g.doQuery(c, db, insertStmt, &vals)
+					if err == nil {
+						break
+					}
+				}
+			}
+		}
+		vals = nil
+		count = 0
+	}
+	rand.Seed(time.Now().UnixNano())
+	defer func() {
+		if r := recover(); r != nil {
+			Log().Error("insertQueryBatcher caught a panic", r, string(debug.Stack()))
+		}
+	}()
+	// Keep getting values from feeder and add to batch.
+	// if feeder times out, execute the batched query
+	// otherwise, execute the batched query once it reaches the GuerrillaDBAndRedisBatchMax threshold
+	feederOk = true
+	for {
+		select {
+		// it may panic when reading on a closed feeder channel. feederOK detects if it was closed
+		case <-stop:
+			Log().Infof("MySQL query batcher stopped (#%d)", batcherId)
+			// Insert any remaining rows
+			inserter(count)
+			feederOk = false
+			close(feeder)
+			return
+		case row := <-feeder:
+
+			vals = append(vals, row...)
+			count++
+			Log().Debug("new feeder row:", row, " cols:", len(row), " count:", count, " worker", batcherId)
+			if count >= GuerrillaDBAndRedisBatchMax {
+				inserter(GuerrillaDBAndRedisBatchMax)
+			}
+			// stop timer from firing (reset the interrupt)
+			if !t.Stop() {
+				// drain the timer
+				<-t.C
+			}
+			t.Reset(timeo)
+		case <-t.C:
+			// anything to insert?
+			if n := len(vals); n > 0 {
+				inserter(count)
+			}
+			t.Reset(timeo)
+		}
+	}
+}
+
+func trimToLimit(str string, limit int) string {
+	ret := strings.TrimSpace(str)
+	if len(str) > limit {
+		ret = str[:limit]
+	}
+	return ret
+}
+
+func (g *GuerrillaDBAndRedisBackend) mysqlConnect() (*sql.DB, error) {
+	tOut := GuerrillaDBAndRedisBatchTimeout / time.Second
+	if g.config.BatchTimeout > 0 {
+		tOut = time.Duration(g.config.BatchTimeout)
+	}
+	tOut += 10
+	// don't go to 30 sec or more
+	if tOut >= 30 {
+		tOut = 29
+	}
+	conf := mysql.Config{
+		User:         g.config.MysqlUser,
+		Passwd:       g.config.MysqlPass,
+		DBName:       g.config.MysqlDB,
+		Net:          "tcp",
+		Addr:         g.config.MysqlHost,
+		ReadTimeout:  tOut * time.Second,
+		WriteTimeout: tOut * time.Second,
+		Params:       map[string]string{"collation": "utf8_general_ci"},
+	}
+	if db, err := sql.Open("mysql", conf.FormatDSN()); err != nil {
+		Log().Error("cannot open mysql:", err)
+		return nil, err
+	} else {
+		// do we have access?
+		rows, err := db.Query("SELECT mail_id FROM " + g.config.MysqlTable + " LIMIT 1")
+		if err != nil {
+			Log().Error("cannot select table", err)
+			return nil, err
+		}
+		rows.Close()
+		return db, nil
+	}
+}
+
+func (c *redisClient) redisConnection(redisInterface string) (err error) {
+	if !c.isConnected {
+		c.conn, err = redis.Dial("tcp", redisInterface)
+		if err != nil {
+			// handle error
+			return err
+		}
+		c.isConnected = true
+	}
+	return nil
+}
+
+type feedChan chan []interface{}
+
+// GuerrillaDbReddis is a specialized processor for Guerrilla mail. It is here as an example.
+// It's an example of a 'monolithic' processor.
+func GuerrillaDbReddis() Decorator {
+
+	g := GuerrillaDBAndRedisBackend{}
+	redisClient := &redisClient{}
+
+	var db *sql.DB
+	var to, body string
+
+	var redisErr error
+
+	var feeders []feedChan
+
+	g.batcherStoppers = make([]chan bool, 0)
+
+	Svc.AddInitializer(InitializeWith(func(backendConfig BackendConfig) error {
+
+		configType := BaseConfig(&guerrillaDBAndRedisConfig{})
+		bcfg, err := Svc.ExtractConfig(backendConfig, configType)
+		if err != nil {
+			return err
+		}
+		g.config = bcfg.(*guerrillaDBAndRedisConfig)
+		db, err = g.mysqlConnect()
+		if err != nil {
+			return err
+		}
+		queryBatcherId++
+		// start the query SQL batching where we will send data via the feeder channel
+		stop := make(chan bool)
+		feeder := make(feedChan, 1)
+		go func(qbID int, stop chan bool) {
+			// we loop so that if insertQueryBatcher panics, it can recover and go in again
+			for {
+				if feederOK := g.insertQueryBatcher(feeder, db, qbID, stop); !feederOK {
+					Log().Debugf("insertQueryBatcher exited (#%d)", qbID)
+					return
+				}
+				Log().Debug("resuming insertQueryBatcher")
+			}
+		}(queryBatcherId, stop)
+		g.batcherStoppers = append(g.batcherStoppers, stop)
+		feeders = append(feeders, feeder)
+		return nil
+	}))
+
+	Svc.AddShutdowner(ShutdownWith(func() error {
+		db.Close()
+		Log().Infof("closed mysql")
+		if redisClient.conn != nil {
+			Log().Infof("closed redis")
+			redisClient.conn.Close()
+		}
+		// send a close signal to all query batchers to exit.
+		for i := range g.batcherStoppers {
+			g.batcherStoppers[i] <- true
+		}
+		g.batcherWg.Wait()
+
+		return nil
+	}))
+
+	var vals []interface{}
+	data := newCompressedData()
+
+	return func(p Processor) Processor {
+		return ProcessWith(func(e *mail.Envelope, task SelectTask) (Result, error) {
+			if task == TaskSaveMail {
+				Log().Debug("Got mail from chan,", e.RemoteIP)
+				to = trimToLimit(strings.TrimSpace(e.RcptTo[0].User)+"@"+g.config.PrimaryHost, 255)
+				e.Helo = trimToLimit(e.Helo, 255)
+				e.RcptTo[0].Host = trimToLimit(e.RcptTo[0].Host, 255)
+				ts := fmt.Sprintf("%d", time.Now().UnixNano())
+				e.ParseHeaders()
+				hash := MD5Hex(
+					to,
+					e.MailFrom.String(),
+					e.Subject,
+					ts)
+				// Add extra headers
+				var addHead string
+				addHead += "Delivered-To: " + to + "\r\n"
+				addHead += "Received: from " + e.Helo + " (" + e.Helo + "  [" + e.RemoteIP + "])\r\n"
+				addHead += "	by " + e.RcptTo[0].Host + " with SMTP id " + hash + "@" + e.RcptTo[0].Host + ";\r\n"
+				addHead += "	" + time.Now().Format(time.RFC1123Z) + "\r\n"
+
+				// data will be compressed when printed, with addHead added to beginning
+
+				data.set([]byte(addHead), &e.Data)
+				body = "gzencode"
+
+				// data will be written to redis - it implements the Stringer interface, redigo uses fmt to
+				// print the data to redis.
+
+				redisErr = redisClient.redisConnection(g.config.RedisInterface)
+				if redisErr == nil {
+					_, doErr := redisClient.conn.Do("SETEX", hash, g.config.RedisExpireSeconds, data)
+					if doErr == nil {
+						body = "redis" // the backend system will know to look in redis for the message data
+						data.clear()   // blank
+					}
+				} else {
+					Log().WithError(redisErr).Warn("Error while connecting redis")
+				}
+
+				vals = []interface{}{} // clear the vals
+				vals = append(vals,
+					trimToLimit(to, 255),
+					trimToLimit(e.MailFrom.String(), 255),
+					trimToLimit(e.Subject, 255),
+					body,
+					data.String(),
+					hash,
+					trimToLimit(to, 255),
+					e.RemoteIP,
+					trimToLimit(e.MailFrom.String(), 255),
+					e.TLS)
+				// give the values to a random query batcher
+				feeders[rand.Intn(len(feeders))] <- vals
+				return p.Process(e, task)
+
+			} else {
+				return p.Process(e, task)
+			}
+		})
+	}
+}

+ 0 - 0
backends/guerrilla_db_redis_test.go → backends/p_guerrilla_db_redis_test.go


+ 58 - 0
backends/p_hasher.go

@@ -0,0 +1,58 @@
+package backends
+
+import (
+	"crypto/md5"
+	"fmt"
+	"io"
+	"strings"
+	"time"
+
+	"github.com/flashmob/go-guerrilla/mail"
+)
+
+// ----------------------------------------------------------------------------------
+// Processor Name: hasher
+// ----------------------------------------------------------------------------------
+// Description   : Generates a unique md5 checksum id for an email
+// ----------------------------------------------------------------------------------
+// Config Options: None
+// --------------:-------------------------------------------------------------------
+// Input         : e.MailFrom, e.Subject, e.RcptTo
+//               : assuming e.Subject was generated by "headersparser" processor
+// ----------------------------------------------------------------------------------
+// Output        : Checksums stored in e.Hashes
+// ----------------------------------------------------------------------------------
+func init() {
+	processors["hasher"] = func() Decorator {
+		return Hasher()
+	}
+}
+
+// The hasher decorator computes a hash of the email for each recipient
+// It appends the hashes to the envelope's Hashes slice.
+func Hasher() Decorator {
+	return func(p Processor) Processor {
+		return ProcessWith(func(e *mail.Envelope, task SelectTask) (Result, error) {
+
+			if task == TaskSaveMail {
+				// seed values shared by all recipients: mail from, subject and a nano timestamp
+				ts := fmt.Sprintf("%d", time.Now().UnixNano())
+				// calculate a unique hash for each recipient.
+				// each recipient needs a fresh digest: copying a hash.Hash
+				// value only copies the interface pointer, not the hash state
+				for i := range e.RcptTo {
+					h := md5.New()
+					io.Copy(h, strings.NewReader(e.MailFrom.String()))
+					io.Copy(h, strings.NewReader(e.Subject))
+					io.Copy(h, strings.NewReader(ts))
+					io.Copy(h, strings.NewReader(e.RcptTo[i].String()))
+					e.Hashes = append(e.Hashes, fmt.Sprintf("%x", h.Sum(nil)))
+				}
+				return p.Process(e, task)
+			} else {
+				return p.Process(e, task)
+			}
+
+		})
+	}
+}

+ 74 - 0
backends/p_header.go

@@ -0,0 +1,74 @@
+package backends
+
+import (
+	"github.com/flashmob/go-guerrilla/mail"
+	"strings"
+	"time"
+)
+
+type HeaderConfig struct {
+	PrimaryHost string `json:"primary_mail_host"`
+}
+
+// ----------------------------------------------------------------------------------
+// Processor Name: header
+// ----------------------------------------------------------------------------------
+// Description   : Adds delivery information headers to e.DeliveryHeader
+// ----------------------------------------------------------------------------------
+// Config Options: primary_mail_host - used to form the Delivered-To address
+// --------------:-------------------------------------------------------------------
+// Input         : e.Helo
+//               : e.RemoteIP
+//               : e.RcptTo
+//               : e.Hashes
+// ----------------------------------------------------------------------------------
+// Output        : Sets e.DeliveryHeader with additional delivery info
+// ----------------------------------------------------------------------------------
+func init() {
+	processors["header"] = func() Decorator {
+		return Header()
+	}
+}
+
+// Generate the MTA delivery header
+// Sets e.DeliveryHeader part of the envelope with the generated header
+func Header() Decorator {
+
+	var config *HeaderConfig
+
+	Svc.AddInitializer(InitializeWith(func(backendConfig BackendConfig) error {
+		configType := BaseConfig(&HeaderConfig{})
+		bcfg, err := Svc.ExtractConfig(backendConfig, configType)
+		if err != nil {
+			return err
+		}
+		config = bcfg.(*HeaderConfig)
+		return nil
+	}))
+
+	return func(p Processor) Processor {
+		return ProcessWith(func(e *mail.Envelope, task SelectTask) (Result, error) {
+			if task == TaskSaveMail {
+				// guard against an empty recipient list before indexing RcptTo[0]
+				if len(e.RcptTo) == 0 {
+					return p.Process(e, task)
+				}
+				to := strings.TrimSpace(e.RcptTo[0].User) + "@" + config.PrimaryHost
+				hash := "unknown"
+				if len(e.Hashes) > 0 {
+					hash = e.Hashes[0]
+				}
+				var addHead string
+				addHead += "Delivered-To: " + to + "\n"
+				addHead += "Received: from " + e.Helo + " (" + e.Helo + "  [" + e.RemoteIP + "])\n"
+				addHead += "	by " + e.RcptTo[0].Host + " with SMTP id " + hash + "@" + e.RcptTo[0].Host + ";\n"
+				addHead += "	" + time.Now().Format(time.RFC1123Z) + "\n"
+				// save the result
+				e.DeliveryHeader = addHead
+				// next processor
+				return p.Process(e, task)
+
+			} else {
+				return p.Process(e, task)
+			}
+		})
+	}
+}

+ 37 - 0
backends/p_headers_parser.go

@@ -0,0 +1,37 @@
+package backends
+
+import (
+	"github.com/flashmob/go-guerrilla/mail"
+)
+
+// ----------------------------------------------------------------------------------
+// Processor Name: headersparser
+// ----------------------------------------------------------------------------------
+// Description   : Parses the header using e.ParseHeaders()
+// ----------------------------------------------------------------------------------
+// Config Options: none
+// --------------:-------------------------------------------------------------------
+// Input         : envelope
+// ----------------------------------------------------------------------------------
+// Output        : Headers will be populated in e.Header
+// ----------------------------------------------------------------------------------
+func init() {
+	processors["headersparser"] = func() Decorator {
+		return HeadersParser()
+	}
+}
+
+func HeadersParser() Decorator {
+	return func(p Processor) Processor {
+		return ProcessWith(func(e *mail.Envelope, task SelectTask) (Result, error) {
+			if task == TaskSaveMail {
+				e.ParseHeaders()
+				// next processor
+				return p.Process(e, task)
+			} else {
+				// next processor
+				return p.Process(e, task)
+			}
+		})
+	}
+}

+ 304 - 0
backends/p_mysql.go

@@ -0,0 +1,304 @@
+package backends
+
+import (
+	"database/sql"
+	"fmt"
+	"strings"
+	"time"
+
+	"github.com/flashmob/go-guerrilla/mail"
+	"github.com/go-sql-driver/mysql"
+
+	"github.com/flashmob/go-guerrilla/response"
+	"math/big"
+	"net"
+	"runtime/debug"
+)
+
+// ----------------------------------------------------------------------------------
+// Processor Name: mysql
+// ----------------------------------------------------------------------------------
+// Description   : Saves the e.Data (email data) and e.DeliveryHeader together in mysql
+//               : using the hash generated by the "hash" processor and stored in
+//               : e.Hashes
+// ----------------------------------------------------------------------------------
+// Config Options: mysql_mail_table string - mysql table name
+//               : mysql_db string - mysql database name
+//               : mysql_host string - mysql host name, eg. 127.0.0.1
+//               : mysql_pass string - mysql password
+//               : mysql_user string - mysql username
+//               : primary_mail_host string - primary host name
+// --------------:-------------------------------------------------------------------
+// Input         : e.Data
+//               : e.DeliveryHeader generated by the Header() processor
+//               : e.MailFrom
+//               : e.Subject - generated by the headersparser processor
+// ----------------------------------------------------------------------------------
+// Output        : Sets e.QueuedId with the first item from e.Hashes
+// ----------------------------------------------------------------------------------
+func init() {
+	processors["mysql"] = func() Decorator {
+		return MySql()
+	}
+}
+
+const procMySQLReadTimeout = time.Second * 10
+const procMySQLWriteTimeout = time.Second * 10
+
+type MysqlProcessorConfig struct {
+	MysqlTable  string `json:"mysql_mail_table"`
+	MysqlDB     string `json:"mysql_db"`
+	MysqlHost   string `json:"mysql_host"`
+	MysqlPass   string `json:"mysql_pass"`
+	MysqlUser   string `json:"mysql_user"`
+	PrimaryHost string `json:"primary_mail_host"`
+}
+
+type MysqlProcessor struct {
+	cache  stmtCache
+	config *MysqlProcessorConfig
+}
+
+func (m *MysqlProcessor) connect(config *MysqlProcessorConfig) (*sql.DB, error) {
+	var db *sql.DB
+	var err error
+	conf := mysql.Config{
+		User:         config.MysqlUser,
+		Passwd:       config.MysqlPass,
+		DBName:       config.MysqlDB,
+		Net:          "tcp",
+		Addr:         config.MysqlHost,
+		ReadTimeout:  procMySQLReadTimeout,
+		WriteTimeout: procMySQLWriteTimeout,
+		Params:       map[string]string{"collation": "utf8_general_ci"},
+	}
+	if db, err = sql.Open("mysql", conf.FormatDSN()); err != nil {
+		Log().Error("cannot open mysql", err)
+		return nil, err
+	}
+	// do we have permission to access the table?
+	_, err = db.Query("SELECT mail_id FROM " + m.config.MysqlTable + " LIMIT 1")
+	if err != nil {
+		//Log().Error("cannot select table", err)
+		return nil, err
+	}
+	Log().Info("connected to mysql on tcp ", config.MysqlHost)
+	return db, err
+}
+
+// prepares the sql query with the number of rows that can be batched with it
+func (g *MysqlProcessor) prepareInsertQuery(rows int, db *sql.DB) *sql.Stmt {
+	if rows == 0 {
+		panic("rows argument cannot be 0")
+	}
+	if g.cache[rows-1] != nil {
+		return g.cache[rows-1]
+	}
+	sqlstr := "INSERT INTO " + g.config.MysqlTable + " "
+	sqlstr += "(`date`, `to`, `from`, `subject`, `body`,  `mail`, `spam_score`, "
+	sqlstr += "`hash`, `content_type`, `recipient`, `has_attach`, `ip_addr`, "
+	sqlstr += "`return_path`, `is_tls`, `message_id`, `reply_to`, `sender`)"
+	sqlstr += " VALUES "
+	values := "(NOW(), ?, ?, ?, ? , ?, 0, ?, ?, ?, 0, ?, ?, ?, ?, ?, ?)"
+	// add more rows
+	comma := ""
+	for i := 0; i < rows; i++ {
+		sqlstr += comma + values
+		if comma == "" {
+			comma = ","
+		}
+	}
+	stmt, sqlErr := db.Prepare(sqlstr)
+	if sqlErr != nil {
+		Log().WithError(sqlErr).Panic("failed while db.Prepare(INSERT...)")
+	}
+	// cache it
+	g.cache[rows-1] = stmt
+	return stmt
+}
+
+func (g *MysqlProcessor) doQuery(c int, db *sql.DB, insertStmt *sql.Stmt, vals *[]interface{}) (execErr error) {
+	defer func() {
+		if r := recover(); r != nil {
+			Log().Error("Recovered from panic:", r, string(debug.Stack()))
+			sum := 0
+			for _, v := range *vals {
+				if str, ok := v.(string); ok {
+					sum = sum + len(str)
+				}
+			}
+			Log().Errorf("panic while inserting query [%s] size:%d, err %v", r, sum, execErr)
+			panic("query failed")
+		}
+	}()
+	// prepare the query used to insert when rows reaches batchMax
+	insertStmt = g.prepareInsertQuery(c, db)
+	_, execErr = insertStmt.Exec(*vals...)
+	if execErr != nil {
+		Log().WithError(execErr).Error("There was a problem with the insert")
+	}
+	return
+}
+
+// for storing ip addresses in the ip_addr column
+func (g *MysqlProcessor) ip2bint(ip string) *big.Int {
+	bint := big.NewInt(0)
+	addr := net.ParseIP(ip)
+	if strings.Contains(ip, ":") {
+		bint.SetBytes(addr.To16())
+	} else {
+		bint.SetBytes(addr.To4())
+	}
+	return bint
+}
+
+func (g *MysqlProcessor) fillAddressFromHeader(e *mail.Envelope, headerKey string) string {
+	if v, ok := e.Header[headerKey]; ok {
+		addr, err := mail.NewAddress(v[0])
+		if err != nil {
+			return ""
+		}
+		return addr.String()
+	}
+	return ""
+}
+
+func MySql() Decorator {
+
+	var config *MysqlProcessorConfig
+	var vals []interface{}
+	var db *sql.DB
+	m := &MysqlProcessor{}
+
+	// open the database connection (it will also check if we can select the table)
+	Svc.AddInitializer(InitializeWith(func(backendConfig BackendConfig) error {
+		configType := BaseConfig(&MysqlProcessorConfig{})
+		bcfg, err := Svc.ExtractConfig(backendConfig, configType)
+		if err != nil {
+			return err
+		}
+		config = bcfg.(*MysqlProcessorConfig)
+		m.config = config
+		db, err = m.connect(config)
+		if err != nil {
+			return err
+		}
+		return nil
+	}))
+
+	// shutdown will close the database connection
+	Svc.AddShutdowner(ShutdownWith(func() error {
+		if db != nil {
+			return db.Close()
+		}
+		return nil
+	}))
+
+	return func(p Processor) Processor {
+		return ProcessWith(func(e *mail.Envelope, task SelectTask) (Result, error) {
+
+			if task == TaskSaveMail {
+				var to, body string
+
+				hash := ""
+				if len(e.Hashes) > 0 {
+					// if saved in redis, hash will be the redis key
+					hash = e.Hashes[0]
+					e.QueuedId = e.Hashes[0]
+				}
+
+				var co *compressor
+				// a compressor was set by the Compress processor
+				if c, ok := e.Values["zlib-compressor"]; ok {
+					body = "gzip"
+					co = c.(*compressor)
+				}
+				// was saved in redis by the Redis processor
+				if _, ok := e.Values["redis"]; ok {
+					body = "redis"
+				}
+
+				for i := range e.RcptTo {
+
+					// use the To header, otherwise rcpt to
+					to = trimToLimit(m.fillAddressFromHeader(e, "To"), 255)
+					if to == "" {
+						// trimToLimit(strings.TrimSpace(e.RcptTo[i].User)+"@"+config.PrimaryHost, 255)
+						to = trimToLimit(strings.TrimSpace(e.RcptTo[i].String()), 255)
+					}
+					mid := trimToLimit(m.fillAddressFromHeader(e, "Message-Id"), 255)
+					if mid == "" {
+						mid = fmt.Sprintf("%s.%s@%s", hash, e.RcptTo[i].User, config.PrimaryHost)
+					}
+					// replyTo is the 'Reply-to' header, it may be blank
+					replyTo := trimToLimit(m.fillAddressFromHeader(e, "Reply-To"), 255)
+					// sender is the 'Sender' header, it may be blank
+					sender := trimToLimit(m.fillAddressFromHeader(e, "Sender"), 255)
+
+					recipient := trimToLimit(strings.TrimSpace(e.RcptTo[i].String()), 255)
+					contentType := ""
+					if v, ok := e.Header["Content-Type"]; ok {
+						contentType = trimToLimit(v[0], 255)
+					}
+
+					// build the values for the query
+					vals = []interface{}{} // clear the vals
+					vals = append(vals,
+						to,
+						trimToLimit(e.MailFrom.String(), 255), // from
+						trimToLimit(e.Subject, 255),
+						body, // body describes how to interpret the data, eg 'redis' means stored in redis, and 'gzip' stored in mysql, using gzip compression
+					)
+					// `mail` column
+					if body == "redis" {
+						// data already saved in redis
+						vals = append(vals, "")
+					} else if co != nil {
+						// use a compressor (automatically adds e.DeliveryHeader)
+						vals = append(vals, co.String())
+
+					} else {
+						vals = append(vals, e.String())
+					}
+
+					vals = append(vals,
+						hash, // hash (redis hash if saved in redis)
+						contentType,
+						recipient,
+						m.ip2bint(e.RemoteIP).Bytes(),         // ip_addr store as varbinary(16)
+						trimToLimit(e.MailFrom.String(), 255), // return_path
+						e.TLS,   // is_tls
+						mid,     // message_id
+						replyTo, // reply_to
+						sender,
+					)
+
+					stmt := m.prepareInsertQuery(1, db)
+					err := m.doQuery(1, db, stmt, &vals)
+					if err != nil {
+						return NewResult("554 Error: could not save email"), StorageError
+					}
+				}
+
+				// continue to the next Processor in the decorator chain
+				return p.Process(e, task)
+			} else if task == TaskValidateRcpt {
+				// if you need to validate the e.Rcpt then change to:
+				if len(e.RcptTo) > 0 {
+					// since this is called each time a recipient is added
+					// validate only the _last_ recipient that was appended
+					last := e.RcptTo[len(e.RcptTo)-1]
+					if len(last.User) > 255 {
+						// return with an error
+						return NewResult(response.Canned.FailRcptCmd), NoSuchUser
+					}
+				}
+				// continue to the next processor
+				return p.Process(e, task)
+			} else {
+				return p.Process(e, task)
+			}
+
+		})
+	}
+}

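The `ip2bint` helper above stores an address as a big integer for the `varbinary(16)` `ip_addr` column. A standalone variant, using `strings.Contains(ip, ":")` so that IPv6 addresses without a `::` run (and `::1` itself) are also routed through `To16()`:

```go
package main

import (
	"fmt"
	"math/big"
	"net"
	"strings"
)

// ip2bint converts an IP address string to a big integer suitable for
// storage in a binary column. Any address containing a colon is treated
// as IPv6 (16 bytes); everything else as IPv4 (4 bytes).
func ip2bint(ip string) *big.Int {
	bint := big.NewInt(0)
	addr := net.ParseIP(ip)
	if addr == nil {
		return bint // unparseable input stays 0
	}
	if strings.Contains(ip, ":") {
		bint.SetBytes(addr.To16())
	} else {
		bint.SetBytes(addr.To4())
	}
	return bint
}

func main() {
	fmt.Println(ip2bint("127.0.0.1")) // prints 2130706433
	fmt.Println(ip2bint("::1"))       // prints 1
}
```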
+ 129 - 0
backends/p_redis.go

@@ -0,0 +1,129 @@
+package backends
+
+import (
+	"fmt"
+
+	"github.com/flashmob/go-guerrilla/mail"
+	"github.com/flashmob/go-guerrilla/response"
+
+	"github.com/garyburd/redigo/redis"
+)
+
+// ----------------------------------------------------------------------------------
+// Processor Name: redis
+// ----------------------------------------------------------------------------------
+// Description   : Saves the e.Data (email data) and e.DeliveryHeader together in redis
+//               : using the hash generated by the "hash" processor and stored in
+//               : e.Hashes
+// ----------------------------------------------------------------------------------
+// Config Options: redis_expire_seconds int - number of seconds before the saved data expires
+//               : redis_interface string - <host>:<port> eg, 127.0.0.1:6379
+// --------------:-------------------------------------------------------------------
+// Input         : e.Data
+//               : e.DeliveryHeader generated by Header() processor
+//               :
+// ----------------------------------------------------------------------------------
+// Output        : Sets e.QueuedId with the first item from e.Hashes
+// ----------------------------------------------------------------------------------
+func init() {
+
+	processors["redis"] = func() Decorator {
+		return Redis()
+	}
+}
+
+type RedisProcessorConfig struct {
+	RedisExpireSeconds int    `json:"redis_expire_seconds"`
+	RedisInterface     string `json:"redis_interface"`
+}
+
+type RedisProcessor struct {
+	isConnected bool
+	conn        redis.Conn
+}
+
+func (r *RedisProcessor) redisConnection(redisInterface string) (err error) {
+	if !r.isConnected {
+		r.conn, err = redis.Dial("tcp", redisInterface)
+		if err != nil {
+			// handle error
+			return err
+		}
+		r.isConnected = true
+	}
+	return nil
+}
+
+// The redis decorator stores the email data in redis
+
+func Redis() Decorator {
+
+	var config *RedisProcessorConfig
+	redisClient := &RedisProcessor{}
+	// read the config into RedisProcessorConfig
+	Svc.AddInitializer(InitializeWith(func(backendConfig BackendConfig) error {
+		configType := BaseConfig(&RedisProcessorConfig{})
+		bcfg, err := Svc.ExtractConfig(backendConfig, configType)
+		if err != nil {
+			return err
+		}
+		config = bcfg.(*RedisProcessorConfig)
+		if redisErr := redisClient.redisConnection(config.RedisInterface); redisErr != nil {
+			err := fmt.Errorf("Redis cannot connect, check your settings: %s", redisErr)
+			return err
+		}
+		return nil
+	}))
+	// When shutting down
+	Svc.AddShutdowner(ShutdownWith(func() error {
+		if redisClient.isConnected {
+			return redisClient.conn.Close()
+		}
+		return nil
+	}))
+
+	var redisErr error
+
+	return func(p Processor) Processor {
+		return ProcessWith(func(e *mail.Envelope, task SelectTask) (Result, error) {
+
+			if task == TaskSaveMail {
+				hash := ""
+				if len(e.Hashes) > 0 {
+					e.QueuedId = e.Hashes[0]
+					hash = e.Hashes[0]
+					var stringer fmt.Stringer
+					// a compressor was set
+					if c, ok := e.Values["zlib-compressor"]; ok {
+						stringer = c.(*compressor)
+					} else {
+						stringer = e
+					}
+					redisErr = redisClient.redisConnection(config.RedisInterface)
+					if redisErr != nil {
+						Log().WithError(redisErr).Warn("Error while connecting to redis")
+						result := NewResult(response.Canned.FailBackendTransaction)
+						return result, redisErr
+					}
+					_, doErr := redisClient.conn.Do("SETEX", hash, config.RedisExpireSeconds, stringer)
+					if doErr != nil {
+						Log().WithError(doErr).Warn("Error while SETEX to redis")
+						result := NewResult(response.Canned.FailBackendTransaction)
+						return result, doErr
+					}
+					e.Values["redis"] = "redis" // the next processor will know to look in redis for the message data
+				} else {
+					Log().Error("Redis needs a Hash() processor before it")
+					result := NewResult(response.Canned.FailBackendTransaction)
+					return result, StorageError
+				}
+
+				return p.Process(e, task)
+			} else {
+				// nothing to do for this task
+				return p.Process(e, task)
+			}
+
+		})
+	}
+}

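For reference, the two redis options above sit alongside the processor chain in the backend section of the config file. A rough illustration (the `save_process` key and the processor list are assumptions based on this refactor's naming; values are made up — only `redis_expire_seconds` and `redis_interface` come from the struct tags above):

```json
"backend_config": {
    "save_process": "HeadersParser|Header|Hash|Compressor|Redis|MySql",
    "redis_expire_seconds": 7200,
    "redis_interface": "127.0.0.1:6379"
}
```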
+ 51 - 0
backends/processor.go

@@ -0,0 +1,51 @@
+package backends
+
+import (
+	"github.com/flashmob/go-guerrilla/mail"
+)
+
+type SelectTask int
+
+const (
+	TaskSaveMail SelectTask = iota
+	TaskValidateRcpt
+)
+
+func (o SelectTask) String() string {
+	switch o {
+	case TaskSaveMail:
+		return "save mail"
+	case TaskValidateRcpt:
+		return "validate recipient"
+	}
+	return "[unnamed task]"
+}
+
+var BackendResultOK = NewResult("200 OK")
+
+// Our processor is defined as something that processes the envelope and returns a result and error
+type Processor interface {
+	Process(*mail.Envelope, SelectTask) (Result, error)
+}
+
+// ProcessWith is a function type with the same signature as Processor's Process method
+type ProcessWith func(*mail.Envelope, SelectTask) (Result, error)
+
+// Process makes ProcessWith satisfy the Processor interface
+func (f ProcessWith) Process(e *mail.Envelope, task SelectTask) (Result, error) {
+	// delegate to the anonymous function
+	return f(e, task)
+}
+
+// DefaultProcessor is an undecorated worker that does nothing
+// Notice DefaultProcessor has no knowledge of the other decorators that have orthogonal concerns.
+type DefaultProcessor struct{}
+
+// do nothing except return the result
+// (this is the last call in the decorator stack, if it got here, then all is good)
+func (w DefaultProcessor) Process(e *mail.Envelope, task SelectTask) (Result, error) {
+	return BackendResultOK, nil
+}
+
+// if no processors specified, skip operation
+type NoopProcessor struct{ DefaultProcessor }

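The Processor/ProcessWith types above compose via decorators. A minimal self-contained sketch of the pattern (simplified types — a plain string stands in for the envelope and Result; the real Gateway's chain-building and ordering details are omitted):

```go
package main

import "fmt"

type Result string

type Processor interface {
	Process(data string) Result
}

// ProcessWith adapts an ordinary function to the Processor interface.
type ProcessWith func(string) Result

func (f ProcessWith) Process(data string) Result { return f(data) }

// A Decorator wraps a Processor with extra behaviour.
type Decorator func(Processor) Processor

// Decorate applies each decorator in turn, so the last one is outermost.
func Decorate(p Processor, ds ...Decorator) Processor {
	for _, d := range ds {
		p = d(p)
	}
	return p
}

// Tag appends a marker before delegating to the next processor in the chain.
func Tag(tag string) Decorator {
	return func(next Processor) Processor {
		return ProcessWith(func(data string) Result {
			return next.Process(data + "|" + tag)
		})
	}
}

// DefaultProcessor is the undecorated end of the chain.
type DefaultProcessor struct{}

func (DefaultProcessor) Process(data string) Result { return Result(data) }

func main() {
	chain := Decorate(DefaultProcessor{}, Tag("b"), Tag("a"))
	fmt.Println(chain.Process("mail")) // prints "mail|a|b"
}
```

Each decorator only knows about the Processor it wraps, which is why processors like "header" or "redis" above can be chained in any order from config.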
+ 1 - 1
backends/util.go

@@ -15,6 +15,7 @@ import (
 // Accounts for folding headers.
 var headerRegex, _ = regexp.Compile(`^([\S ]+):([\S ]+(?:\r\n\s[\S ]+)?)`)
 
+// ParseHeaders is deprecated, see mail.Envelope.ParseHeaders instead
 func ParseHeaders(mailData string) map[string]string {
 	var headerSectionEnds int
 	for i, char := range mailData[:len(mailData)-4] {
@@ -25,7 +26,6 @@ func ParseHeaders(mailData string) map[string]string {
 		}
 	}
 	headers := make(map[string]string)
-	// TODO header comments and textproto Reader instead of regex
 	matches := headerRegex.FindAllStringSubmatch(mailData[:headerSectionEnds], -1)
 	for _, h := range matches {
 		name := textproto.CanonicalMIMEHeaderKey(strings.TrimSpace(strings.Replace(h[1], "\r\n", "", -1)))

+ 17 - 0
backends/validate.go

@@ -0,0 +1,17 @@
+package backends
+
+import (
+	"errors"
+)
+
+type RcptError error
+
+var (
+	NoSuchUser          = RcptError(errors.New("no such user"))
+	StorageNotAvailable = RcptError(errors.New("storage not available"))
+	StorageTooBusy      = RcptError(errors.New("storage too busy"))
+	StorageTimeout      = RcptError(errors.New("storage timeout"))
+	QuotaExceeded       = RcptError(errors.New("quota exceeded"))
+	UserSuspended       = RcptError(errors.New("user suspended"))
+	StorageError        = RcptError(errors.New("storage error"))
+)

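These sentinel values let a caller branch on the failure kind by simple identity comparison, as the mysql processor does when it returns `NoSuchUser`. A toy illustration (the `validateRcpt` helper is hypothetical, not part of the package):

```go
package main

import (
	"errors"
	"fmt"
)

// Mirrors the sentinel-error style of backends/validate.go.
type RcptError error

var NoSuchUser = RcptError(errors.New("no such user"))

// validateRcpt is a made-up validator, rejecting over-long local parts
// the same way the mysql processor's TaskValidateRcpt branch does.
func validateRcpt(user string) error {
	if len(user) > 255 {
		return NoSuchUser
	}
	return nil
}

func main() {
	err := validateRcpt(string(make([]byte, 300)))
	fmt.Println(err == NoSuchUser) // prints true: compared by identity
}
```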
+ 13 - 21
client.go

@@ -5,8 +5,8 @@ import (
 	"bytes"
 	"crypto/tls"
 	"fmt"
-	"github.com/flashmob/go-guerrilla/envelope"
 	"github.com/flashmob/go-guerrilla/log"
+	"github.com/flashmob/go-guerrilla/mail"
 	"net"
 	"net/textproto"
 	"sync"
@@ -30,7 +30,7 @@ const (
 )
 
 type client struct {
-	*envelope.Envelope
+	*mail.Envelope
 	ID          uint64
 	ConnectedAt time.Time
 	KilledAt    time.Time
@@ -50,19 +50,20 @@ type client struct {
 	log       log.Logger
 }
 
-// Allocate a new client
-func NewClient(conn net.Conn, clientID uint64, logger log.Logger) *client {
+// NewClient allocates a new client.
+func NewClient(conn net.Conn, clientID uint64, logger log.Logger, envelope *mail.Pool) *client {
 	c := &client{
 		conn: conn,
-		Envelope: &envelope.Envelope{
-			RemoteAddress: getRemoteAddr(conn),
-		},
+		// Envelope will be borrowed from the envelope pool
+		// the envelope could be 'detached' from the client later when processing
+		Envelope:    envelope.Borrow(getRemoteAddr(conn), clientID),
 		ConnectedAt: time.Now(),
 		bufin:       newSMTPBufferedReader(conn),
 		bufout:      bufio.NewWriter(conn),
 		ID:          clientID,
 		log:         logger,
 	}
+
 	// used for reading the DATA state
 	c.smtpReader = textproto.NewReader(c.bufin.Reader)
 	return c
@@ -112,18 +113,14 @@ func (c *client) sendResponse(r ...interface{}) {
 // -End of DATA command
 // TLS handshake
 func (c *client) resetTransaction() {
-	c.MailFrom = envelope.EmailAddress{}
-	c.RcptTo = []envelope.EmailAddress{}
-	c.Data.Reset()
-	c.Subject = ""
-	c.Header = nil
+	c.Envelope.ResetTransaction()
 }
 
 // isInTransaction returns true if the connection is inside a transaction.
 // A transaction starts after a MAIL command gets issued by the client.
 // Call resetTransaction to end the transaction
 func (c *client) isInTransaction() bool {
-	isMailFromEmpty := c.MailFrom == (envelope.EmailAddress{})
+	isMailFromEmpty := c.MailFrom == (mail.Address{})
 	if isMailFromEmpty {
 		return false
 	}
@@ -158,24 +155,19 @@ func (c *client) closeConn() {
 }
 
 // init is called after the client is borrowed from the pool, to get it ready for the connection
-func (c *client) init(conn net.Conn, clientID uint64) {
+func (c *client) init(conn net.Conn, clientID uint64, ep *mail.Pool) {
 	c.conn = conn
 	// reset our reader & writer
 	c.bufout.Reset(conn)
 	c.bufin.Reset(conn)
-	// reset the data buffer, keep it allocated
-	c.Data.Reset()
 	// reset session data
 	c.state = 0
 	c.KilledAt = time.Time{}
 	c.ConnectedAt = time.Now()
 	c.ID = clientID
-	c.TLS = false
 	c.errors = 0
-	c.Helo = ""
-	c.Header = nil
-	c.RemoteAddress = getRemoteAddr(conn)
-
+	// borrow an envelope from the envelope pool
+	c.Envelope = ep.Borrow(getRemoteAddr(conn), clientID)
 }
 
 // getID returns the client's unique ID

+ 2 - 2
cmd/guerrillad/root.go

@@ -7,8 +7,8 @@ import (
 
 var rootCmd = &cobra.Command{
 	Use:   "guerrillad",
-	Short: "small SMTP server",
-	Long: `It's a small SMTP server written in Go, for the purpose of receiving large volumes of email.
+	Short: "small SMTP daemon",
+	Long: `It's a small SMTP daemon written in Go, for the purpose of receiving large volumes of email.
 Written for GuerrillaMail.com which processes tens of thousands of emails every hour.`,
 	Run: nil,
 }

+ 57 - 146
cmd/guerrillad/serve.go

@@ -1,18 +1,13 @@
 package main
 
 import (
-	"encoding/json"
-	"errors"
 	"fmt"
 	"github.com/flashmob/go-guerrilla"
-	"github.com/flashmob/go-guerrilla/backends"
 	"github.com/flashmob/go-guerrilla/log"
 	"github.com/spf13/cobra"
-	"io/ioutil"
 	"os"
 	"os/exec"
 	"os/signal"
-	"reflect"
 	"strconv"
 	"strings"
 	"syscall"
@@ -29,53 +24,64 @@ var (
 
 	serveCmd = &cobra.Command{
 		Use:   "serve",
-		Short: "start the small SMTP server",
+		Short: "start the daemon and start all available servers",
 		Run:   serve,
 	}
 
-	cmdConfig     = CmdConfig{}
-	signalChannel = make(chan os.Signal, 1) // for trapping SIG_HUP
+	signalChannel = make(chan os.Signal, 1) // for trapping SIGHUP and friends
 	mainlog       log.Logger
+
+	d guerrilla.Daemon
 )
 
 func init() {
 	// log to stderr on startup
-	var logOpenError error
-	if mainlog, logOpenError = log.GetLogger(log.OutputStderr.String()); logOpenError != nil {
-		mainlog.WithError(logOpenError).Errorf("Failed creating a logger to %s", log.OutputStderr)
+	var err error
+	mainlog, err = log.GetLogger(log.OutputStderr.String(), log.InfoLevel.String())
+	if err != nil {
+		mainlog.WithError(err).Errorf("Failed creating a logger to %s", log.OutputStderr)
+	}
+	cfgFile := "goguerrilla.conf" // deprecated default name
+	if _, err := os.Stat(cfgFile); err != nil {
+		cfgFile = "goguerrilla.conf.json" // use the new name
 	}
 	serveCmd.PersistentFlags().StringVarP(&configPath, "config", "c",
-		"goguerrilla.conf", "Path to the configuration file")
+		cfgFile, "Path to the configuration file")
 	// intentionally didn't specify default pidFile; value from config is used if flag is empty
 	serveCmd.PersistentFlags().StringVarP(&pidFile, "pidFile", "p",
 		"", "Path to the pid file")
-
 	rootCmd.AddCommand(serveCmd)
 }
 
-func sigHandler(app guerrilla.Guerrilla) {
-	// handle SIGHUP for reloading the configuration while running
-	signal.Notify(signalChannel, syscall.SIGHUP, syscall.SIGTERM, syscall.SIGQUIT, syscall.SIGINT, syscall.SIGKILL)
-
+func sigHandler() {
+	signal.Notify(signalChannel,
+		syscall.SIGHUP,
+		syscall.SIGTERM,
+		syscall.SIGQUIT,
+		syscall.SIGINT,
+		syscall.SIGKILL,
+		syscall.SIGUSR1,
+	)
 	for sig := range signalChannel {
 		if sig == syscall.SIGHUP {
-			// save old config & load in new one
-			oldConfig := cmdConfig
-			newConfig := CmdConfig{}
-			err := readConfig(configPath, pidFile, &newConfig)
-			if err != nil {
-				// new config will not be applied
-				mainlog.WithError(err).Error("Error while ReadConfig (reload)")
-				// re-open logs
-				cmdConfig.EmitLogReopenEvents(app)
+			if ac, err := readConfig(configPath, pidFile); err == nil {
+				d.ReloadConfig(*ac)
 			} else {
-				cmdConfig = newConfig
-				mainlog.Infof("Configuration was reloaded at %s", guerrilla.ConfigLoadTime)
-				cmdConfig.emitChangeEvents(&oldConfig, app)
+				mainlog.WithError(err).Error("Could not reload config")
 			}
+		} else if sig == syscall.SIGUSR1 {
+			d.ReopenLogs()
 		} else if sig == syscall.SIGTERM || sig == syscall.SIGQUIT || sig == syscall.SIGINT {
 			mainlog.Infof("Shutdown signal caught")
-			app.Shutdown()
+			go func() {
+				select {
+				// exit if graceful shutdown not finished in 60 sec.
+				case <-time.After(time.Second * 60):
+					mainlog.Error("graceful shutdown timed out")
+					os.Exit(1)
+				}
+			}()
+			d.Shutdown()
 			mainlog.Infof("Shutdown completed, exiting.")
 			return
 		} else {
@@ -85,39 +91,21 @@ func sigHandler(app guerrilla.Guerrilla) {
 	}
 }
 
-func subscribeBackendEvent(event guerrilla.Event, backend backends.Backend, app guerrilla.Guerrilla) {
-
-	app.Subscribe(event, func(cmdConfig *CmdConfig) {
-		logger, _ := log.GetLogger(cmdConfig.LogFile)
-		var err error
-		if err = backend.Shutdown(); err != nil {
-			logger.WithError(err).Warn("Backend failed to shutdown")
-			return
-		}
-		backend, err = backends.New(cmdConfig.BackendName, cmdConfig.BackendConfig, logger)
-		if err != nil {
-			logger.WithError(err).Fatalf("Error while loading the backend %q",
-				cmdConfig.BackendName)
-		} else {
-			logger.Info("Backend started:", cmdConfig.BackendName)
-		}
-	})
-}
-
 func serve(cmd *cobra.Command, args []string) {
 	logVersion()
-
-	err := readConfig(configPath, pidFile, &cmdConfig)
+	d = guerrilla.Daemon{Logger: mainlog}
+	ac, err := readConfig(configPath, pidFile)
 	if err != nil {
 		mainlog.WithError(err).Fatal("Error while reading config")
 	}
+	d.SetConfig(*ac)
 
 	// Check that max clients is not greater than system open file limit.
 	fileLimit := getFileLimit()
 
 	if fileLimit > 0 {
 		maxClients := 0
-		for _, s := range cmdConfig.Servers {
+		for _, s := range ac.Servers {
 			maxClients += s.MaxClients
 		}
 		if maxClients > fileLimit {
@@ -126,95 +114,35 @@ func serve(cmd *cobra.Command, args []string) {
 		}
 		}
 	}
 	}
 
 
-	// Backend setup
-	var backend backends.Backend
-	backend, err = backends.New(cmdConfig.BackendName, cmdConfig.BackendConfig, mainlog)
-	if err != nil {
-		mainlog.WithError(err).Fatalf("Error while loading the backend %q",
-			cmdConfig.BackendName)
-	}
-
-	app, err := guerrilla.New(&cmdConfig.AppConfig, backend, mainlog)
+	err = d.Start()
 	if err != nil {
 	if err != nil {
 		mainlog.WithError(err).Error("Error(s) when creating new server(s)")
 		mainlog.WithError(err).Error("Error(s) when creating new server(s)")
+		os.Exit(1)
 	}
 	}
-
-	// start the app
-	err = app.Start()
-	if err != nil {
-		mainlog.WithError(err).Error("Error(s) when starting server(s)")
-	}
-	subscribeBackendEvent(guerrilla.EvConfigBackendConfig, backend, app)
-	subscribeBackendEvent(guerrilla.EvConfigBackendName, backend, app)
-	// Write out our PID
-	writePid(cmdConfig.PidFile)
-	// ...and write out our pid whenever the file name changes in the config
-	app.Subscribe(guerrilla.EvConfigPidFile, func(ac *guerrilla.AppConfig) {
-		writePid(ac.PidFile)
-	})
-	// change the logger from stdrerr to one from config
-	mainlog.Infof("main log configured to %s", cmdConfig.LogFile)
-	var logOpenError error
-	if mainlog, logOpenError = log.GetLogger(cmdConfig.LogFile); logOpenError != nil {
-		mainlog.WithError(logOpenError).Errorf("Failed changing to a custom logger [%s]", cmdConfig.LogFile)
-	}
-	app.SetLogger(mainlog)
-	sigHandler(app)
+	sigHandler()
 
 
 }
 }
 
 
-// Superset of `guerrilla.AppConfig` containing options specific
-// the the command line interface.
-type CmdConfig struct {
-	guerrilla.AppConfig
-	BackendName   string                 `json:"backend_name"`
-	BackendConfig backends.BackendConfig `json:"backend_config"`
-}
-
-func (c *CmdConfig) load(jsonBytes []byte) error {
-	err := json.Unmarshal(jsonBytes, &c)
-	if err != nil {
-		return fmt.Errorf("Could not parse config file: %s", err.Error())
-	} else {
-		// load in guerrilla.AppConfig
-		return c.AppConfig.Load(jsonBytes)
-	}
-}
-
-func (c *CmdConfig) emitChangeEvents(oldConfig *CmdConfig, app guerrilla.Guerrilla) {
-	// has backend changed?
-	if !reflect.DeepEqual((*c).BackendConfig, (*oldConfig).BackendConfig) {
-		app.Publish(guerrilla.EvConfigBackendConfig, c)
-	}
-	if c.BackendName != oldConfig.BackendName {
-		app.Publish(guerrilla.EvConfigBackendName, c)
-	}
-	// call other emitChangeEvents
-	c.AppConfig.EmitChangeEvents(&oldConfig.AppConfig, app)
-}
-
-// ReadConfig which should be called at startup, or when a SIG_HUP is caught
-func readConfig(path string, pidFile string, config *CmdConfig) error {
-	// load in the config.
-	data, err := ioutil.ReadFile(path)
+// readConfig is called at startup, or when a SIGHUP is caught

+func readConfig(path string, pidFile string) (*guerrilla.AppConfig, error) {
+	// Load in the config.
+	// Note: this is the only place where we make an exception to
+	// "treat config values as immutable"; for example, command-line
+	// flags may override config values here
+	appConfig, err := d.LoadConfig(path)
 	if err != nil {
 	if err != nil {
-		return fmt.Errorf("Could not read config file: %s", err.Error())
-	}
-	if err := config.load(data); err != nil {
-		return err
+		return &appConfig, fmt.Errorf("Could not read config file: %s", err.Error())
 	}
 	}
 	// override config pidFile with the flag from the command line
 	// override config pidFile with the flag from the command line
 	if len(pidFile) > 0 {
 	if len(pidFile) > 0 {
-		config.AppConfig.PidFile = pidFile
-	} else if len(config.AppConfig.PidFile) == 0 {
-		config.AppConfig.PidFile = defaultPidFile
+		appConfig.PidFile = pidFile
+	} else if len(appConfig.PidFile) == 0 {
+		appConfig.PidFile = defaultPidFile
 	}
 	}
-
-	if len(config.AllowedHosts) == 0 {
-		return errors.New("Empty `allowed_hosts` is not allowed")
+	if verbose {
+		appConfig.LogLevel = "debug"
 	}
 	}
-	guerrilla.ConfigLoadTime = time.Now()
-	return nil
+	return &appConfig, nil
 }
 }
 
 
 func getFileLimit() int {
 func getFileLimit() int {
@@ -229,20 +157,3 @@ func getFileLimit() int {
 	}
 	}
 	return limit
 	return limit
 }
 }
-
-func writePid(pidFile string) {
-	if len(pidFile) > 0 {
-		if f, err := os.Create(pidFile); err == nil {
-			defer f.Close()
-			pid := os.Getpid()
-			if _, err := f.WriteString(fmt.Sprintf("%d", pid)); err == nil {
-				f.Sync()
-				mainlog.Infof("pid_file (%s) written with pid:%v", pidFile, pid)
-			} else {
-				mainlog.WithError(err).Fatalf("Error while writing pidFile (%s)", pidFile)
-			}
-		} else {
-			mainlog.WithError(err).Fatalf("Error while creating pidFile (%s)", pidFile)
-		}
-	}
-}

+ 209 - 85
cmd/guerrillad/serve_test.go

@@ -3,7 +3,6 @@ package main
 import (
 import (
 	"crypto/tls"
 	"crypto/tls"
 	"encoding/json"
 	"encoding/json"
-	"fmt"
 	"github.com/flashmob/go-guerrilla"
 	"github.com/flashmob/go-guerrilla"
 	"github.com/flashmob/go-guerrilla/backends"
 	"github.com/flashmob/go-guerrilla/backends"
 	"github.com/flashmob/go-guerrilla/log"
 	"github.com/flashmob/go-guerrilla/log"
@@ -33,8 +32,9 @@ var configJsonA = `
       "guerrillamail.net",
       "guerrillamail.net",
       "guerrillamail.org"
       "guerrillamail.org"
     ],
     ],
-    "backend_name": "dummy",
     "backend_config": {
     "backend_config": {
+    	"save_workers_size" : 1,
+    	"save_process": "HeadersParser|Debugger",
         "log_received_mails": true
         "log_received_mails": true
     },
     },
     "servers" : [
     "servers" : [
@@ -45,7 +45,7 @@ var configJsonA = `
             "private_key_file":"../..//tests/mail2.guerrillamail.com.key.pem",
             "private_key_file":"../..//tests/mail2.guerrillamail.com.key.pem",
             "public_key_file":"../../tests/mail2.guerrillamail.com.cert.pem",
             "public_key_file":"../../tests/mail2.guerrillamail.com.cert.pem",
             "timeout":180,
             "timeout":180,
-            "listen_interface":"127.0.0.1:25",
+            "listen_interface":"127.0.0.1:3536",
             "start_tls_on":true,
             "start_tls_on":true,
             "tls_always_on":false,
             "tls_always_on":false,
             "max_clients": 1000,
             "max_clients": 1000,
@@ -81,8 +81,9 @@ var configJsonB = `
       "guerrillamail.net",
       "guerrillamail.net",
       "guerrillamail.org"
       "guerrillamail.org"
     ],
     ],
-    "backend_name": "dummy",
     "backend_config": {
     "backend_config": {
+    	"save_workers_size" : 1,
+    	"save_process": "HeadersParser|Debugger",
         "log_received_mails": false
         "log_received_mails": false
     },
     },
     "servers" : [
     "servers" : [
@@ -93,7 +94,7 @@ var configJsonB = `
             "private_key_file":"../..//tests/mail2.guerrillamail.com.key.pem",
             "private_key_file":"../..//tests/mail2.guerrillamail.com.key.pem",
             "public_key_file":"../../tests/mail2.guerrillamail.com.cert.pem",
             "public_key_file":"../../tests/mail2.guerrillamail.com.cert.pem",
             "timeout":180,
             "timeout":180,
-            "listen_interface":"127.0.0.1:25",
+            "listen_interface":"127.0.0.1:3536",
             "start_tls_on":true,
             "start_tls_on":true,
             "tls_always_on":false,
             "tls_always_on":false,
             "max_clients": 1000,
             "max_clients": 1000,
@@ -127,7 +128,10 @@ var configJsonC = `
             "redis_interface" : "127.0.0.1:6379",
             "redis_interface" : "127.0.0.1:6379",
             "redis_expire_seconds" : 7200,
             "redis_expire_seconds" : 7200,
             "save_workers_size" : 3,
             "save_workers_size" : 3,
-            "primary_mail_host":"sharklasers.com"
+            "primary_mail_host":"sharklasers.com",
+            "save_workers_size" : 1,
+	    "save_process": "HeadersParser|Debugger",
+	    "log_received_mails": true
         },
         },
     "servers" : [
     "servers" : [
         {
         {
@@ -173,8 +177,9 @@ var configJsonD = `
       "guerrillamail.net",
       "guerrillamail.net",
       "guerrillamail.org"
       "guerrillamail.org"
     ],
     ],
-    "backend_name": "dummy",
     "backend_config": {
     "backend_config": {
+        "save_workers_size" : 1,
+    	"save_process": "HeadersParser|Debugger",
         "log_received_mails": false
         "log_received_mails": false
     },
     },
     "servers" : [
     "servers" : [
@@ -208,6 +213,65 @@ var configJsonD = `
 }
 }
 `
 `
 
 
+// adds 127.0.0.1:4655, a secure server
+var configJsonE = `
+{
+    "log_file" : "../../tests/testlog",
+    "log_level" : "debug",
+    "pid_file" : "./pidfile2.pid",
+    "allowed_hosts": [
+      "guerrillamail.com",
+      "guerrillamailblock.com",
+      "sharklasers.com",
+      "guerrillamail.net",
+      "guerrillamail.org"
+    ],
+    "backend_config" :
+        {
+            "save_process_old": "HeadersParser|Debugger|Hasher|Header|Compressor|Redis|MySql",
+            "save_process": "GuerrillaRedisDB",
+            "log_received_mails" : true,
+            "mysql_db":"gmail_mail",
+            "mysql_host":"127.0.0.1:3306",
+            "mysql_pass":"secret",
+            "mysql_user":"root",
+            "mail_table":"new_mail",
+            "redis_interface" : "127.0.0.1:6379",
+             "redis_expire_seconds" : 7200,
+            "save_workers_size" : 3,
+            "primary_mail_host":"sharklasers.com"
+        },
+    "servers" : [
+        {
+            "is_enabled" : true,
+            "host_name":"mail.test.com",
+            "max_size": 1000000,
+            "private_key_file":"../..//tests/mail2.guerrillamail.com.key.pem",
+            "public_key_file":"../../tests/mail2.guerrillamail.com.cert.pem",
+            "timeout":180,
+            "listen_interface":"127.0.0.1:2552",
+            "start_tls_on":true,
+            "tls_always_on":false,
+            "max_clients": 1000,
+            "log_file" : "../../tests/testlog"
+        },
+        {
+            "is_enabled" : true,
+            "host_name":"secure.test.com",
+            "max_size":1000000,
+            "private_key_file":"../..//tests/mail2.guerrillamail.com.key.pem",
+            "public_key_file":"../../tests/mail2.guerrillamail.com.cert.pem",
+            "timeout":180,
+            "listen_interface":"127.0.0.1:4655",
+            "start_tls_on":false,
+            "tls_always_on":true,
+            "max_clients":500,
+            "log_file" : "../../tests/testlog"
+        }
+    ]
+}
+`
+
 const testPauseDuration = time.Millisecond * 600
 const testPauseDuration = time.Millisecond * 600
 
 
 // reload config
 // reload config
@@ -243,59 +307,66 @@ func sigKill() {
 // make sure that we get all the config change events
 // make sure that we get all the config change events
 func TestCmdConfigChangeEvents(t *testing.T) {
 func TestCmdConfigChangeEvents(t *testing.T) {
 
 
-	oldconf := &CmdConfig{}
-	oldconf.load([]byte(configJsonA))
+	oldconf := &guerrilla.AppConfig{}
+	if err := oldconf.Load([]byte(configJsonA)); err != nil {
+		t.Error("configJsonA is invalid", err)
+	}
 
 
-	newconf := &CmdConfig{}
-	newconf.load([]byte(configJsonB))
+	newconf := &guerrilla.AppConfig{}
+	if err := newconf.Load([]byte(configJsonB)); err != nil {
+		t.Error("configJsonB is invalid", err)
+	}
 
 
-	newerconf := &CmdConfig{}
-	newerconf.load([]byte(configJsonC))
+	newerconf := &guerrilla.AppConfig{}
+	if err := newerconf.Load([]byte(configJsonC)); err != nil {
+		t.Error("configJsonC is invalid", err)
+	}
 
 
 	expectedEvents := map[guerrilla.Event]bool{
 	expectedEvents := map[guerrilla.Event]bool{
-		guerrilla.EvConfigBackendConfig: false,
-		guerrilla.EvConfigBackendName:   false,
-		guerrilla.EvConfigEvServerNew:   false,
+		guerrilla.EventConfigBackendConfig: false,
+		guerrilla.EventConfigServerNew:     false,
 	}
 	}
-	mainlog, _ = log.GetLogger("off")
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
 
 
 	bcfg := backends.BackendConfig{"log_received_mails": true}
 	bcfg := backends.BackendConfig{"log_received_mails": true}
-	backend, err := backends.New("dummy", bcfg, mainlog)
-	app, err := guerrilla.New(&oldconf.AppConfig, backend, mainlog)
+	backend, err := backends.New(bcfg, mainlog)
+	app, err := guerrilla.New(oldconf, backend, mainlog)
 	if err != nil {
 	if err != nil {
-		//log.Info("Failed to create new app", err)
+		t.Error("Failed to create new app", err)
 	}
 	}
-	toUnsubscribe := map[guerrilla.Event]func(c *CmdConfig){}
+	toUnsubscribe := map[guerrilla.Event]func(c *guerrilla.AppConfig){}
 	toUnsubscribeS := map[guerrilla.Event]func(c *guerrilla.ServerConfig){}
 	toUnsubscribeS := map[guerrilla.Event]func(c *guerrilla.ServerConfig){}
 
 
 	for event := range expectedEvents {
 	for event := range expectedEvents {
 		// Put in anon func since range is overwriting event
 		// Put in anon func since range is overwriting event
 		func(e guerrilla.Event) {
 		func(e guerrilla.Event) {
-
 			if strings.Index(e.String(), "server_change") == 0 {
 			if strings.Index(e.String(), "server_change") == 0 {
 				f := func(c *guerrilla.ServerConfig) {
 				f := func(c *guerrilla.ServerConfig) {
 					expectedEvents[e] = true
 					expectedEvents[e] = true
 				}
 				}
-				app.Subscribe(event, f)
-				toUnsubscribeS[event] = f
+				app.Subscribe(e, f)
+				toUnsubscribeS[e] = f
 			} else {
 			} else {
-				f := func(c *CmdConfig) {
+				f := func(c *guerrilla.AppConfig) {
 					expectedEvents[e] = true
 					expectedEvents[e] = true
 				}
 				}
-				app.Subscribe(event, f)
-				toUnsubscribe[event] = f
+				app.Subscribe(e, f)
+				toUnsubscribe[e] = f
 			}
 			}
 
 
 		}(event)
 		}(event)
 	}
 	}
 
 
 	// emit events
 	// emit events
-	newconf.emitChangeEvents(oldconf, app)
-	newerconf.emitChangeEvents(newconf, app)
+	newconf.EmitChangeEvents(oldconf, app)
+	newerconf.EmitChangeEvents(newconf, app)
 	// unsubscribe
 	// unsubscribe
 	for unevent, unfun := range toUnsubscribe {
 	for unevent, unfun := range toUnsubscribe {
 		app.Unsubscribe(unevent, unfun)
 		app.Unsubscribe(unevent, unfun)
 	}
 	}
+	for unevent, unfun := range toUnsubscribeS {
+		app.Unsubscribe(unevent, unfun)
+	}
 
 
 	for event, val := range expectedEvents {
 	for event, val := range expectedEvents {
 		if val == false {
 		if val == false {
@@ -311,9 +382,10 @@ func TestCmdConfigChangeEvents(t *testing.T) {
 
 
 // start server, change config, send SIGHUP, confirm that the pidfile changed & backend reloaded
 // start server, change config, send SIGHUP, confirm that the pidfile changed & backend reloaded
 func TestServe(t *testing.T) {
 func TestServe(t *testing.T) {
+
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 
 
-	mainlog, _ = log.GetLogger("../../tests/testlog")
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
 
 
 	ioutil.WriteFile("configJsonA.json", []byte(configJsonA), 0644)
 	ioutil.WriteFile("configJsonA.json", []byte(configJsonA), 0644)
 	cmd := &cobra.Command{}
 	cmd := &cobra.Command{}
@@ -344,12 +416,7 @@ func TestServe(t *testing.T) {
 	// Would not work on windows as kill is not available.
 	// Would not work on windows as kill is not available.
 	// TODO: Implement an alternative test for windows.
 	// TODO: Implement an alternative test for windows.
 	if runtime.GOOS != "windows" {
 	if runtime.GOOS != "windows" {
-		ecmd := exec.Command("kill", "-HUP", string(data))
-		_, err = ecmd.Output()
-		if err != nil {
-			t.Error("could not SIGHUP", err)
-			t.FailNow()
-		}
+		sigHup()
 		time.Sleep(testPauseDuration) // allow sighup to do its job
 		time.Sleep(testPauseDuration) // allow sighup to do its job
 		// did the pidfile change as expected?
 		// did the pidfile change as expected?
 		if _, err := os.Stat("./pidfile2.pid"); os.IsNotExist(err) {
 		if _, err := os.Stat("./pidfile2.pid"); os.IsNotExist(err) {
@@ -358,6 +425,7 @@ func TestServe(t *testing.T) {
 	}
 	}
 	// send kill signal and wait for exit
 	// send kill signal and wait for exit
 	sigKill()
 	sigKill()
+	// wait for exit
 	serveWG.Wait()
 	serveWG.Wait()
 
 
 	// did the backend start as expected?
 	// did the backend start as expected?
@@ -367,7 +435,7 @@ func TestServe(t *testing.T) {
 	}
 	}
 	if read, err := ioutil.ReadAll(fd); err == nil {
 	if read, err := ioutil.ReadAll(fd); err == nil {
 		logOutput := string(read)
 		logOutput := string(read)
-		if i := strings.Index(logOutput, "Backend started:dummy"); i < 0 {
+		if i := strings.Index(logOutput, "new backend started"); i < 0 {
 			t.Error("Dummy backend not restarted")
 			t.Error("Dummy backend not restarted")
 		}
 		}
 	}
 	}
@@ -386,7 +454,7 @@ func TestServe(t *testing.T) {
 // then connect to it & HELO.
 // then connect to it & HELO.
 func TestServerAddEvent(t *testing.T) {
 func TestServerAddEvent(t *testing.T) {
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
-	mainlog, _ = log.GetLogger("../../tests/testlog")
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
 	// start the server by emulating the serve command
 	// start the server by emulating the serve command
 	ioutil.WriteFile("configJsonA.json", []byte(configJsonA), 0644)
 	ioutil.WriteFile("configJsonA.json", []byte(configJsonA), 0644)
 	cmd := &cobra.Command{}
 	cmd := &cobra.Command{}
@@ -399,8 +467,8 @@ func TestServerAddEvent(t *testing.T) {
 	}()
 	}()
 	time.Sleep(testPauseDuration) // allow the server to start
 	time.Sleep(testPauseDuration) // allow the server to start
 	// now change the config by adding a server
 	// now change the config by adding a server
-	conf := &CmdConfig{}                                 // blank one
-	conf.load([]byte(configJsonA))                       // load configJsonA
+	conf := &guerrilla.AppConfig{}                       // blank one
+	conf.Load([]byte(configJsonA))                       // load configJsonA
 	newServer := conf.Servers[0]                         // copy the first server config
 	newServer := conf.Servers[0]                         // copy the first server config
 	newServer.ListenInterface = "127.0.0.1:2526"         // change it
 	newServer.ListenInterface = "127.0.0.1:2526"         // change it
 	newConf := conf                                      // copy the config
 	newConf := conf                                      // copy the config
@@ -453,7 +521,7 @@ func TestServerAddEvent(t *testing.T) {
 // then connect to 127.0.0.1:2228 & HELO.
 // then connect to 127.0.0.1:2228 & HELO.
 func TestServerStartEvent(t *testing.T) {
 func TestServerStartEvent(t *testing.T) {
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
-	mainlog, _ = log.GetLogger("../../tests/testlog")
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
 	// start the server by emulating the serve command
 	// start the server by emulating the serve command
 	ioutil.WriteFile("configJsonA.json", []byte(configJsonA), 0644)
 	ioutil.WriteFile("configJsonA.json", []byte(configJsonA), 0644)
 	cmd := &cobra.Command{}
 	cmd := &cobra.Command{}
@@ -466,8 +534,8 @@ func TestServerStartEvent(t *testing.T) {
 	}()
 	}()
 	time.Sleep(testPauseDuration)
 	time.Sleep(testPauseDuration)
 	// now change the config by adding a server
 	// now change the config by adding a server
-	conf := &CmdConfig{}           // blank one
-	conf.load([]byte(configJsonA)) // load configJsonA
+	conf := &guerrilla.AppConfig{} // blank one
+	conf.Load([]byte(configJsonA)) // load configJsonA
 
 
 	newConf := conf // copy the config
 	newConf := conf // copy the config
 	newConf.Servers[1].IsEnabled = true
 	newConf.Servers[1].IsEnabled = true
@@ -523,7 +591,7 @@ func TestServerStartEvent(t *testing.T) {
 
 
 func TestServerStopEvent(t *testing.T) {
 func TestServerStopEvent(t *testing.T) {
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
-	mainlog, _ = log.GetLogger("../../tests/testlog")
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
 	// start the server by emulating the serve command
 	// start the server by emulating the serve command
 	ioutil.WriteFile("configJsonA.json", []byte(configJsonA), 0644)
 	ioutil.WriteFile("configJsonA.json", []byte(configJsonA), 0644)
 	cmd := &cobra.Command{}
 	cmd := &cobra.Command{}
@@ -536,8 +604,8 @@ func TestServerStopEvent(t *testing.T) {
 	}()
 	}()
 	time.Sleep(testPauseDuration)
 	time.Sleep(testPauseDuration)
 	// now change the config by enabling a server
 	// now change the config by enabling a server
-	conf := &CmdConfig{}           // blank one
-	conf.load([]byte(configJsonA)) // load configJsonA
+	conf := &guerrilla.AppConfig{} // blank one
+	conf.Load([]byte(configJsonA)) // load configJsonA
 
 
 	newConf := conf // copy the config
 	newConf := conf // copy the config
 	newConf.Servers[1].IsEnabled = true
 	newConf.Servers[1].IsEnabled = true
@@ -611,11 +679,11 @@ func TestServerStopEvent(t *testing.T) {
 
 
 func TestAllowedHostsEvent(t *testing.T) {
 func TestAllowedHostsEvent(t *testing.T) {
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
-	mainlog, _ = log.GetLogger("../../tests/testlog")
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
 	// start the server by emulating the serve command
 	// start the server by emulating the serve command
 	ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
 	ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
-	conf := &CmdConfig{}           // blank one
-	conf.load([]byte(configJsonD)) // load configJsonD
+	conf := &guerrilla.AppConfig{} // blank one
+	conf.Load([]byte(configJsonD)) // load configJsonD
 	cmd := &cobra.Command{}
 	cmd := &cobra.Command{}
 	configPath = "configJsonD.json"
 	configPath = "configJsonD.json"
 	var serveWG sync.WaitGroup
 	var serveWG sync.WaitGroup
@@ -628,8 +696,8 @@ func TestAllowedHostsEvent(t *testing.T) {
 	time.Sleep(testPauseDuration)
 	time.Sleep(testPauseDuration)
 
 
 	// now connect and try RCPT TO with an invalid host
 	// now connect and try RCPT TO with an invalid host
-	if conn, buffin, err := test.Connect(conf.AppConfig.Servers[1], 20); err != nil {
-		t.Error("Could not connect to new server", conf.AppConfig.Servers[1].ListenInterface, err)
+	if conn, buffin, err := test.Connect(conf.Servers[1], 20); err != nil {
+		t.Error("Could not connect to new server", conf.Servers[1].ListenInterface, err)
 	} else {
 	} else {
 		if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 		if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 			expect := "250 secure.test.com Hello"
 			expect := "250 secure.test.com Hello"
@@ -649,7 +717,7 @@ func TestAllowedHostsEvent(t *testing.T) {
 
 
 	// now change the config by adding a host to allowed hosts
 	// now change the config by adding a host to allowed hosts
 
 
-	newConf := conf // copy the cmdConfg
+	newConf := conf
 	newConf.AllowedHosts = append(newConf.AllowedHosts, "grr.la")
 	newConf.AllowedHosts = append(newConf.AllowedHosts, "grr.la")
 	if jsonbytes, err := json.Marshal(newConf); err == nil {
 	if jsonbytes, err := json.Marshal(newConf); err == nil {
 		ioutil.WriteFile("configJsonD.json", []byte(jsonbytes), 0644)
 		ioutil.WriteFile("configJsonD.json", []byte(jsonbytes), 0644)
@@ -661,8 +729,8 @@ func TestAllowedHostsEvent(t *testing.T) {
 	time.Sleep(testPauseDuration) // pause for config to reload
 	time.Sleep(testPauseDuration) // pause for config to reload
 
 
 	// now repeat the same conversation, RCPT TO should be accepted
 	// now repeat the same conversation, RCPT TO should be accepted
-	if conn, buffin, err := test.Connect(conf.AppConfig.Servers[1], 20); err != nil {
-		t.Error("Could not connect to new server", conf.AppConfig.Servers[1].ListenInterface, err)
+	if conn, buffin, err := test.Connect(conf.Servers[1], 20); err != nil {
+		t.Error("Could not connect to new server", conf.Servers[1].ListenInterface, err)
 	} else {
 	} else {
 		if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 		if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 			expect := "250 secure.test.com Hello"
 			expect := "250 secure.test.com Hello"
@@ -690,7 +758,7 @@ func TestAllowedHostsEvent(t *testing.T) {
 		//fmt.Println(logOutput)
 		//fmt.Println(logOutput)
 		if i := strings.Index(logOutput, "allowed_hosts config changed, a new list was set"); i < 0 {
 		if i := strings.Index(logOutput, "allowed_hosts config changed, a new list was set"); i < 0 {
 			t.Errorf("did not change allowed_hosts, most likely because Bus.Subscribe(\"%s\" didnt fire",
 			t.Errorf("did not change allowed_hosts, most likely because Bus.Subscribe(\"%s\" didnt fire",
-				guerrilla.EvConfigAllowedHosts)
+				guerrilla.EventConfigAllowedHosts)
 		}
 		}
 	}
 	}
 	// cleanup
 	// cleanup
@@ -714,11 +782,11 @@ func TestTLSConfigEvent(t *testing.T) {
 	if _, err := os.Stat("../../tests/mail2.guerrillamail.com.cert.pem"); err != nil {
 	if _, err := os.Stat("../../tests/mail2.guerrillamail.com.cert.pem"); err != nil {
 		t.Error("Did not create cert ", err)
 		t.Error("Did not create cert ", err)
 	}
 	}
-	mainlog, _ = log.GetLogger("../../tests/testlog")
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
 	// start the server by emulating the serve command
 	// start the server by emulating the serve command
 	ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
 	ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
-	conf := &CmdConfig{}           // blank one
-	conf.load([]byte(configJsonD)) // load configJsonD
+	conf := &guerrilla.AppConfig{} // blank one
+	conf.Load([]byte(configJsonD)) // load configJsonD
 	cmd := &cobra.Command{}
 	cmd := &cobra.Command{}
 	configPath = "configJsonD.json"
 	configPath = "configJsonD.json"
 	var serveWG sync.WaitGroup
 	var serveWG sync.WaitGroup
@@ -731,8 +799,8 @@ func TestTLSConfigEvent(t *testing.T) {
 
 
 	// Test STARTTLS handshake
 	// Test STARTTLS handshake
 	testTlsHandshake := func() {
 	testTlsHandshake := func() {
-		if conn, buffin, err := test.Connect(conf.AppConfig.Servers[0], 20); err != nil {
-			t.Error("Could not connect to server", conf.AppConfig.Servers[0].ListenInterface, err)
+		if conn, buffin, err := test.Connect(conf.Servers[0], 20); err != nil {
+			t.Error("Could not connect to server", conf.Servers[0].ListenInterface, err)
 		} else {
 		} else {
 			if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 			if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 				expect := "250 mail.test.com Hello"
 				expect := "250 mail.test.com Hello"
@@ -749,7 +817,7 @@ func TestTLSConfigEvent(t *testing.T) {
 								ServerName:         "127.0.0.1",
 								ServerName:         "127.0.0.1",
 							})
 							})
 							if err := tlsConn.Handshake(); err != nil {
 							if err := tlsConn.Handshake(); err != nil {
-								t.Error("Failed to handshake", conf.AppConfig.Servers[0].ListenInterface)
+								t.Error("Failed to handshake", conf.Servers[0].ListenInterface)
 							} else {
 							} else {
 								conn = tlsConn
 								conn = tlsConn
 								mainlog.Info("TLS Handshake succeeded")
 								mainlog.Info("TLS Handshake succeeded")
@@ -821,8 +889,8 @@ func TestBadTLSStart(t *testing.T) {
 		}
 		}
 		// next run the server
 		// next run the server
 		ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
 		ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
-		conf := &CmdConfig{}           // blank one
-		conf.load([]byte(configJsonD)) // load configJsonD
+		conf := &guerrilla.AppConfig{} // blank one
+		conf.Load([]byte(configJsonD)) // load configJsonD
 
 
 		cmd := &cobra.Command{}
 		cmd := &cobra.Command{}
 		configPath = "configJsonD.json"
 		configPath = "configJsonD.json"
@@ -856,13 +924,13 @@ func TestBadTLSStart(t *testing.T) {
 // Test config reload with a bad TLS config
 // Test config reload with a bad TLS config
 // It should ignore the config reload, keep running with old settings
 // It should ignore the config reload, keep running with old settings
 func TestBadTLSReload(t *testing.T) {
 func TestBadTLSReload(t *testing.T) {
-	mainlog, _ = log.GetLogger("../../tests/testlog")
-	// start with a good vert
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
+	// start with a good cert
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 	// start the server by emulating the serve command
 	// start the server by emulating the serve command
 	ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
 	ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
-	conf := &CmdConfig{}           // blank one
-	conf.load([]byte(configJsonD)) // load configJsonD
+	conf := &guerrilla.AppConfig{} // blank one
+	conf.Load([]byte(configJsonD)) // load configJsonD
 	cmd := &cobra.Command{}
 	cmd := &cobra.Command{}
 	configPath = "configJsonD.json"
 	configPath = "configJsonD.json"
 	var serveWG sync.WaitGroup
 	var serveWG sync.WaitGroup
@@ -874,8 +942,8 @@ func TestBadTLSReload(t *testing.T) {
 	}()
 	}()
 	time.Sleep(testPauseDuration)
 	time.Sleep(testPauseDuration)
 
 
-	if conn, buffin, err := test.Connect(conf.AppConfig.Servers[0], 20); err != nil {
-		t.Error("Could not connect to server", conf.AppConfig.Servers[0].ListenInterface, err)
+	if conn, buffin, err := test.Connect(conf.Servers[0], 20); err != nil {
+		t.Error("Could not connect to server", conf.Servers[0].ListenInterface, err)
 	} else {
 	} else {
 		if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 		if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 			expect := "250 mail.test.com Hello"
 			expect := "250 mail.test.com Hello"
@@ -901,8 +969,8 @@ func TestBadTLSReload(t *testing.T) {
 
 
 	// we should still be able to talk to it
 	// we should still be able to talk to it
 
 
-	if conn, buffin, err := test.Connect(conf.AppConfig.Servers[0], 20); err != nil {
-		t.Error("Could not connect to server", conf.AppConfig.Servers[0].ListenInterface, err)
+	if conn, buffin, err := test.Connect(conf.Servers[0], 20); err != nil {
+		t.Error("Could not connect to server", conf.Servers[0].ListenInterface, err)
 	} else {
 	} else {
 		if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 		if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 			expect := "250 mail.test.com Hello"
 			expect := "250 mail.test.com Hello"
@@ -934,12 +1002,12 @@ func TestBadTLSReload(t *testing.T) {
 // Start with configJsonD.json
 // Start with configJsonD.json
 
 
 func TestSetTimeoutEvent(t *testing.T) {
 func TestSetTimeoutEvent(t *testing.T) {
-	mainlog, _ = log.GetLogger("../../tests/testlog")
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 	// start the server by emulating the serve command
 	// start the server by emulating the serve command
 	ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
 	ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
-	conf := &CmdConfig{}           // blank one
-	conf.load([]byte(configJsonD)) // load configJsonD
+	conf := &guerrilla.AppConfig{} // blank one
+	conf.Load([]byte(configJsonD)) // load configJsonD
 	cmd := &cobra.Command{}
 	cmd := &cobra.Command{}
 	configPath = "configJsonD.json"
 	configPath = "configJsonD.json"
 	var serveWG sync.WaitGroup
 	var serveWG sync.WaitGroup
@@ -966,8 +1034,8 @@ func TestSetTimeoutEvent(t *testing.T) {
 	time.Sleep(testPauseDuration) // config reload
 	time.Sleep(testPauseDuration) // config reload
 
 
 	var waitTimeout sync.WaitGroup
 	var waitTimeout sync.WaitGroup
-	if conn, buffin, err := test.Connect(conf.AppConfig.Servers[0], 20); err != nil {
-		t.Error("Could not connect to server", conf.AppConfig.Servers[0].ListenInterface, err)
+	if conn, buffin, err := test.Connect(conf.Servers[0], 20); err != nil {
+		t.Error("Could not connect to server", conf.Servers[0].ListenInterface, err)
 	} else {
 	} else {
 		waitTimeout.Add(1)
 		waitTimeout.Add(1)
 		go func() {
 		go func() {
@@ -996,7 +1064,7 @@ func TestSetTimeoutEvent(t *testing.T) {
 	fd, _ := os.Open("../../tests/testlog")
 	fd, _ := os.Open("../../tests/testlog")
 	if read, err := ioutil.ReadAll(fd); err == nil {
 	if read, err := ioutil.ReadAll(fd); err == nil {
 		logOutput := string(read)
 		logOutput := string(read)
-		fmt.Println(logOutput)
+		//fmt.Println(logOutput)
 		if i := strings.Index(logOutput, "i/o timeout"); i < 0 {
 		if i := strings.Index(logOutput, "i/o timeout"); i < 0 {
 			t.Error("Connection to 127.0.0.1:2552 didn't timeout as expected")
 			t.Error("Connection to 127.0.0.1:2552 didn't timeout as expected")
 		}
 		}
@@ -1012,12 +1080,12 @@ func TestSetTimeoutEvent(t *testing.T) {
 // Start in log_level = debug
 // Start in log_level = debug
 // Load config & start server
 // Load config & start server
 func TestDebugLevelChange(t *testing.T) {
 func TestDebugLevelChange(t *testing.T) {
-	//mainlog, _ = log.GetLogger("../../tests/testlog")
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
 	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
 	// start the server by emulating the serve command
 	ioutil.WriteFile("configJsonD.json", []byte(configJsonD), 0644)
-	conf := &CmdConfig{}           // blank one
-	conf.load([]byte(configJsonD)) // load configJsonD
+	conf := &guerrilla.AppConfig{} // blank one
+	conf.Load([]byte(configJsonD)) // load configJsonD
 	conf.LogLevel = "debug"
 	cmd := &cobra.Command{}
 	configPath = "configJsonD.json"
@@ -1030,8 +1098,8 @@ func TestDebugLevelChange(t *testing.T) {
 	}()
 	time.Sleep(testPauseDuration)
 
-	if conn, buffin, err := test.Connect(conf.AppConfig.Servers[0], 20); err != nil {
-		t.Error("Could not connect to server", conf.AppConfig.Servers[0].ListenInterface, err)
+	if conn, buffin, err := test.Connect(conf.Servers[0], 20); err != nil {
+		t.Error("Could not connect to server", conf.Servers[0].ListenInterface, err)
 	} else {
 		if result, err := test.Command(conn, buffin, "HELO"); err == nil {
 			expect := "250 mail.test.com Hello"
@@ -1044,7 +1112,7 @@ func TestDebugLevelChange(t *testing.T) {
 	// set the log_level to info
 
 	newConf := conf // copy the config
-	newConf.LogLevel = "info"
+	newConf.LogLevel = log.InfoLevel.String()
 	if jsonbytes, err := json.Marshal(newConf); err == nil {
 		ioutil.WriteFile("configJsonD.json", []byte(jsonbytes), 0644)
 	} else {
@@ -1055,8 +1123,8 @@ func TestDebugLevelChange(t *testing.T) {
 	time.Sleep(testPauseDuration) // log to change
 
 	// connect again, this time we should see info
-	if conn, buffin, err := test.Connect(conf.AppConfig.Servers[0], 20); err != nil {
-		t.Error("Could not connect to server", conf.AppConfig.Servers[0].ListenInterface, err)
+	if conn, buffin, err := test.Connect(conf.Servers[0], 20); err != nil {
+		t.Error("Could not connect to server", conf.Servers[0].ListenInterface, err)
 	} else {
 		if result, err := test.Command(conn, buffin, "NOOP"); err == nil {
 			expect := "200 2.0.0 OK"
@@ -1089,3 +1157,59 @@ func TestDebugLevelChange(t *testing.T) {
 	os.Remove("./pidfile.pid")
 
 }
+
+// When reloading with a bad backend config, it should revert to old backend config
+func TestBadBackendReload(t *testing.T) {
+	testcert.GenerateCert("mail2.guerrillamail.com", "", 365*24*time.Hour, false, 2048, "P256", "../../tests/")
+
+	mainlog, _ = log.GetLogger("../../tests/testlog", "debug")
+
+	ioutil.WriteFile("configJsonA.json", []byte(configJsonA), 0644)
+	cmd := &cobra.Command{}
+	configPath = "configJsonA.json"
+	var serveWG sync.WaitGroup
+	serveWG.Add(1)
+	go func() {
+		serve(cmd, []string{})
+		serveWG.Done()
+	}()
+	time.Sleep(testPauseDuration)
+
+	// change the config file to the one with a broken backend
+	ioutil.WriteFile("configJsonA.json", []byte(configJsonE), 0644)
+
+	// test SIGHUP via the kill command
+	// This would not work on Windows, as kill is not available.
+	// TODO: Implement an alternative test for windows.
+	if runtime.GOOS != "windows" {
+		sigHup()
+		time.Sleep(testPauseDuration) // allow sighup to do its job
+		// did the pidfile change as expected?
+		if _, err := os.Stat("./pidfile2.pid"); os.IsNotExist(err) {
+			t.Error("pidfile not changed after SIGHUP", err)
+		}
+	}
+
+	// send kill signal and wait for exit
+	sigKill()
+	serveWG.Wait()
+
+	// did the backend revert to the old config as expected?
+	fd, err := os.Open("../../tests/testlog")
+	if err != nil {
+		t.Error(err)
+	}
+	if read, err := ioutil.ReadAll(fd); err == nil {
+		logOutput := string(read)
+		if i := strings.Index(logOutput, "reverted to old backend config"); i < 0 {
+			t.Error("did not revert to old backend config")
+		}
+	}
+
+	// cleanup
+	os.Truncate("../../tests/testlog", 0)
+	os.Remove("configJsonA.json")
+	os.Remove("./pidfile.pid")
+	os.Remove("./pidfile2.pid")
+
+}

+ 194 - 108
config.go

@@ -5,6 +5,8 @@ import (
 	"encoding/json"
 	"errors"
 	"fmt"
+	"github.com/flashmob/go-guerrilla/backends"
+	"github.com/flashmob/go-guerrilla/log"
 	"os"
 	"reflect"
 	"strings"
@@ -12,97 +14,59 @@ import (
 
 // AppConfig is the holder of the configuration of the app
 type AppConfig struct {
-	Servers      []ServerConfig `json:"servers"`
-	AllowedHosts []string       `json:"allowed_hosts"`
-	PidFile      string         `json:"pid_file"`
-	LogFile      string         `json:"log_file,omitempty"`
-	LogLevel     string         `json:"log_level,omitempty"`
+	// Servers can have one or more items.
+	// Defaults to 1 server listening on 127.0.0.1:2525
+	Servers []ServerConfig `json:"servers"`
+	// AllowedHosts lists which hosts to accept email for. Defaults to os.Hostname
+	AllowedHosts []string `json:"allowed_hosts"`
+	// PidFile is the path for writing out the process id. No output if empty
+	PidFile string `json:"pid_file"`
+	// LogFile is where the logs go. Use path to file, or "stderr", "stdout"
+	// or "off". Default "stderr"
+	LogFile string `json:"log_file,omitempty"`
+	// LogLevel controls the lowest level we log.
+	// "info", "debug", "error", "panic". Default "info"
+	LogLevel string `json:"log_level,omitempty"`
+	// BackendConfig configures the email envelope processing backend
+	BackendConfig backends.BackendConfig `json:"backend_config"`
 }
 
 // ServerConfig specifies config options for a single server
 type ServerConfig struct {
-	IsEnabled       bool   `json:"is_enabled"`
-	Hostname        string `json:"host_name"`
-	MaxSize         int64  `json:"max_size"`
-	PrivateKeyFile  string `json:"private_key_file"`
-	PublicKeyFile   string `json:"public_key_file"`
-	Timeout         int    `json:"timeout"`
+	// IsEnabled set to true to start the server, false will ignore it
+	IsEnabled bool `json:"is_enabled"`
+	// Hostname will be used in the server's reply to HELO/EHLO. If TLS enabled
+	// make sure that the Hostname matches the cert. Defaults to os.Hostname()
+	Hostname string `json:"host_name"`
+	// MaxSize is the maximum size of an email that will be accepted for delivery.
+	// Defaults to 10 Mebibytes
+	MaxSize int64 `json:"max_size"`
+	// PrivateKeyFile path to cert private key in PEM format. Will be ignored if blank
+	PrivateKeyFile string `json:"private_key_file"`
+	// PublicKeyFile path to cert (public key) chain in PEM format.
+	// Will be ignored if blank
+	PublicKeyFile string `json:"public_key_file"`
+	// Timeout specifies the connection timeout in seconds. Defaults to 30
+	Timeout int `json:"timeout"`
+	// Listen interface specified in <ip>:<port> - defaults to 127.0.0.1:2525
 	ListenInterface string `json:"listen_interface"`
-	StartTLSOn      bool   `json:"start_tls_on,omitempty"`
-	TLSAlwaysOn     bool   `json:"tls_always_on,omitempty"`
-	MaxClients      int    `json:"max_clients"`
-	LogFile         string `json:"log_file,omitempty"`
+	// StartTLSOn should we offer STARTTLS command. Cert must be valid.
+	// False by default
+	StartTLSOn bool `json:"start_tls_on,omitempty"`
+	// TLSAlwaysOn run this server as a pure TLS server, i.e. SMTPS
+	TLSAlwaysOn bool `json:"tls_always_on,omitempty"`
+	// MaxClients controls the maximum number of clients we can handle at once.
+	// Defaults to 100
+	MaxClients int `json:"max_clients"`
+	// LogFile is where the logs go. Use path to file, or "stderr", "stdout" or "off".
+	// defaults to AppConfig.Log file setting
+	LogFile string `json:"log_file,omitempty"`
 
+	// The following are used to watch for certificate changes so that TLS can be reloaded
 	_privateKeyFile_mtime int
 	_publicKeyFile_mtime  int
 }
 
-type Event int
-
-const (
-	// when a new config was loaded
-	EvConfigNewConfig Event = iota
-	// when allowed_hosts changed
-	EvConfigAllowedHosts
-	// when pid_file changed
-	EvConfigPidFile
-	// when log_file changed
-	EvConfigLogFile
-	// when it's time to reload the main log file
-	EvConfigLogReopen
-	// when log level changed
-	EvConfigLogLevel
-	// when the backend changed
-	EvConfigBackendName
-	// when the backend's config changed
-	EvConfigBackendConfig
-	// when a new server was added
-	EvConfigEvServerNew
-	// when an existing server was removed
-	EvConfigServerRemove
-	// when a new server config was detected (general event)
-	EvConfigServerConfig
-	// when a server was enabled
-	EvConfigServerStart
-	// when a server was disabled
-	EvConfigServerStop
-	// when a server's log file changed
-	EvConfigServerLogFile
-	// when it's time to reload the server's log
-	EvConfigServerLogReopen
-	// when a server's timeout changed
-	EvConfigServerTimeout
-	// when a server's max clients changed
-	EvConfigServerMaxClients
-	// when a server's TLS config changed
-	EvConfigServerTLSConfig
-)
-
-var configEvents = [...]string{
-	"config_change:new_config",
-	"config_change:allowed_hosts",
-	"config_change:pid_file",
-	"config_change:log_file",
-	"config_change:reopen_log_file",
-	"config_change:log_level",
-	"config_change:backend_config",
-	"config_change:backend_name",
-	"server_change:new_server",
-	"server_change:remove_server",
-	"server_change:update_config",
-	"server_change:start_server",
-	"server_change:stop_server",
-	"server_change:new_log_file",
-	"server_change:reopen_log_file",
-	"server_change:timeout",
-	"server_change:max_clients",
-	"server_change:tls_config",
-}
-
-func (e Event) String() string {
-	return configEvents[e]
-}
-
 // Unmarshalls json data into AppConfig struct and any other initialization of the struct
 // also does validation, returns error if validation failed or something went wrong
 func (c *AppConfig) Load(jsonBytes []byte) error {
@@ -110,8 +74,11 @@ func (c *AppConfig) Load(jsonBytes []byte) error {
 	if err != nil {
 		return fmt.Errorf("could not parse config file: %s", err)
 	}
-	if len(c.AllowedHosts) == 0 {
-		return errors.New("empty AllowedHosts is not allowed")
+	if err = c.setDefaults(); err != nil {
+		return err
+	}
+	if err = c.setBackendDefaults(); err != nil {
+		return err
 	}
 
 	// all servers must be valid in order to continue
@@ -130,28 +97,29 @@ func (c *AppConfig) Load(jsonBytes []byte) error {
 
 // Emits any configuration change events onto the event bus.
 func (c *AppConfig) EmitChangeEvents(oldConfig *AppConfig, app Guerrilla) {
+	// has backend changed?
+	if !reflect.DeepEqual((*c).BackendConfig, (*oldConfig).BackendConfig) {
+		app.Publish(EventConfigBackendConfig, c)
+	}
 	// has config changed, general check
 	if !reflect.DeepEqual(oldConfig, c) {
-		app.Publish(EvConfigNewConfig, c)
+		app.Publish(EventConfigNewConfig, c)
 	}
 	// has 'allowed hosts' changed?
 	if !reflect.DeepEqual(oldConfig.AllowedHosts, c.AllowedHosts) {
-		app.Publish(EvConfigAllowedHosts, c)
+		app.Publish(EventConfigAllowedHosts, c)
 	}
 	// has pid file changed?
 	if strings.Compare(oldConfig.PidFile, c.PidFile) != 0 {
-		app.Publish(EvConfigPidFile, c)
+		app.Publish(EventConfigPidFile, c)
 	}
 	// has mainlog log changed?
 	if strings.Compare(oldConfig.LogFile, c.LogFile) != 0 {
-		app.Publish(EvConfigLogFile, c)
-	} else {
-		// since config file has not changed, we reload it
-		app.Publish(EvConfigLogReopen, c)
+		app.Publish(EventConfigLogFile, c)
 	}
 	// has log level changed?
 	if strings.Compare(oldConfig.LogLevel, c.LogLevel) != 0 {
-		app.Publish(EvConfigLogLevel, c)
+		app.Publish(EventConfigLogLevel, c)
 	}
 	// server config changes
 	oldServers := oldConfig.getServers()
@@ -164,21 +132,21 @@ func (c *AppConfig) EmitChangeEvents(oldConfig *AppConfig, app Guerrilla) {
 			newServer.emitChangeEvents(oldServer, app)
 		} else {
 			// start new server
-			app.Publish(EvConfigEvServerNew, newServer)
+			app.Publish(EventConfigServerNew, newServer)
 		}
 
 	}
 	// remove any servers that don't exist anymore
 	for _, oldserver := range oldServers {
-		app.Publish(EvConfigServerRemove, oldserver)
+		app.Publish(EventConfigServerRemove, oldserver)
 	}
 }
 
 // EmitLogReopen emits log reopen events using existing config
 func (c *AppConfig) EmitLogReopenEvents(app Guerrilla) {
-	app.Publish(EvConfigLogReopen, c)
+	app.Publish(EventConfigLogReopen, c)
 	for _, sc := range c.getServers() {
-		app.Publish(EvConfigServerLogReopen, sc)
+		app.Publish(EventConfigServerLogReopen, sc)
 	}
 }
 
@@ -191,6 +159,114 @@ func (c *AppConfig) getServers() map[string]*ServerConfig {
 	return servers
 }
 
+// setDefaults fills in default server settings for values that were not configured
+// The defaults are:
+// * Server listening to 127.0.0.1:2525
+// * use your hostname to determine which hosts to accept email for
+// * 100 maximum clients
+// * 10MB max message size
+// * log to Stderr,
+// * log level set to "debug"
+// * timeout to 30 sec
+// * Backend configured with the following processors: `HeadersParser|Header|Debugger`
+// where it will log the received emails.
+func (c *AppConfig) setDefaults() error {
+	if c.LogFile == "" {
+		c.LogFile = log.OutputStderr.String()
+	}
+	if c.LogLevel == "" {
+		c.LogLevel = "debug"
+	}
+	if len(c.AllowedHosts) == 0 {
+		if h, err := os.Hostname(); err != nil {
+			return err
+		} else {
+			c.AllowedHosts = append(c.AllowedHosts, h)
+		}
+	}
+	h, err := os.Hostname()
+	if err != nil {
+		return err
+	}
+	if len(c.Servers) == 0 {
+		sc := ServerConfig{}
+		sc.LogFile = c.LogFile
+		sc.ListenInterface = defaultInterface
+		sc.IsEnabled = true
+		sc.Hostname = h
+		sc.MaxClients = 100
+		sc.Timeout = 30
+		sc.MaxSize = 10 << 20 // 10 Mebibytes
+		c.Servers = append(c.Servers, sc)
+	} else {
+		// make sure each server has defaults correctly configured
+		for i := range c.Servers {
+			if c.Servers[i].Hostname == "" {
+				c.Servers[i].Hostname = h
+			}
+			if c.Servers[i].MaxClients == 0 {
+				c.Servers[i].MaxClients = 100
+			}
+			if c.Servers[i].Timeout == 0 {
+				c.Servers[i].Timeout = 30
+			}
+			if c.Servers[i].MaxSize == 0 {
+				c.Servers[i].MaxSize = 10 << 20 // 10 Mebibytes
+			}
+			if c.Servers[i].ListenInterface == "" {
+				return fmt.Errorf("listen interface not specified for server at index %d", i)
+			}
+			if c.Servers[i].LogFile == "" {
+				c.Servers[i].LogFile = c.LogFile
+			}
+			// validate the server config
+			err = c.Servers[i].Validate()
+			if err != nil {
+				return err
+			}
+		}
+	}
+	return nil
+}
+
+// setBackendDefaults sets default values for the backend config.
+// If no backend config was added before starting, a default config is used;
+// otherwise, any missing required values are filled in with defaults.
+func (c *AppConfig) setBackendDefaults() error {
+
+	if len(c.BackendConfig) == 0 {
+		h, err := os.Hostname()
+		if err != nil {
+			return err
+		}
+		c.BackendConfig = backends.BackendConfig{
+			"log_received_mails": true,
+			"save_workers_size":  1,
+			"save_process":       "HeadersParser|Header|Debugger",
+			"primary_mail_host":  h,
+		}
+	} else {
+		if _, ok := c.BackendConfig["save_process"]; !ok {
+			c.BackendConfig["save_process"] = "HeadersParser|Header|Debugger"
+		}
+		if _, ok := c.BackendConfig["primary_mail_host"]; !ok {
+			h, err := os.Hostname()
+			if err != nil {
+				return err
+			}
+			c.BackendConfig["primary_mail_host"] = h
+		}
+		if _, ok := c.BackendConfig["save_workers_size"]; !ok {
+			c.BackendConfig["save_workers_size"] = 1
+		}
+
+		if _, ok := c.BackendConfig["log_received_mails"]; !ok {
+			c.BackendConfig["log_received_mails"] = false
+		}
+	}
+	return nil
+}
+
 // Emits any configuration change events on the server.
 // All events are fired and run synchronously
 func (sc *ServerConfig) emitChangeEvents(oldServer *ServerConfig, app Guerrilla) {
@@ -201,33 +277,33 @@ func (sc *ServerConfig) emitChangeEvents(oldServer *ServerConfig, app Guerrilla)
 	)
 	if len(changes) > 0 {
 		// something changed in the server config
-		app.Publish(EvConfigServerConfig, sc)
+		app.Publish(EventConfigServerConfig, sc)
 	}
 
 	// enable or disable?
 	if _, ok := changes["IsEnabled"]; ok {
 		if sc.IsEnabled {
-			app.Publish(EvConfigServerStart, sc)
+			app.Publish(EventConfigServerStart, sc)
 		} else {
-			app.Publish(EvConfigServerStop, sc)
+			app.Publish(EventConfigServerStop, sc)
 		}
 		// do not emit any more events when IsEnabled changed
 		return
 	}
 	// log file change?
 	if _, ok := changes["LogFile"]; ok {
-		app.Publish(EvConfigServerLogFile, sc)
+		app.Publish(EventConfigServerLogFile, sc)
 	} else {
 		// since config file has not changed, we reload it
-		app.Publish(EvConfigServerLogReopen, sc)
+		app.Publish(EventConfigServerLogReopen, sc)
 	}
 	// timeout changed
 	if _, ok := changes["Timeout"]; ok {
-		app.Publish(EvConfigServerTimeout, sc)
+		app.Publish(EventConfigServerTimeout, sc)
 	}
 	// max_clients changed
 	if _, ok := changes["MaxClients"]; ok {
-		app.Publish(EvConfigServerMaxClients, sc)
+		app.Publish(EventConfigServerMaxClients, sc)
 	}
 
 	// tls changed
@@ -246,7 +322,7 @@ func (sc *ServerConfig) emitChangeEvents(oldServer *ServerConfig, app Guerrilla)
 		}
 		return false
 	}(); ok {
-		app.Publish(EvConfigServerTLSConfig, sc)
+		app.Publish(EventConfigServerTLSConfig, sc)
 	}
 }
 
@@ -279,16 +355,26 @@ func (sc *ServerConfig) getTlsKeyTimestamps() (int, int) {
 }
 
 // Validate validates the server's configuration.
-func (sc *ServerConfig) Validate() Errors {
+func (sc *ServerConfig) Validate() error {
 	var errs Errors
-	if _, err := tls.LoadX509KeyPair(sc.PublicKeyFile, sc.PrivateKeyFile); err != nil {
-		if sc.StartTLSOn || sc.TLSAlwaysOn {
+
+	if sc.StartTLSOn || sc.TLSAlwaysOn {
+		if sc.PublicKeyFile == "" {
+			errs = append(errs, errors.New("PublicKeyFile is empty"))
+		}
+		if sc.PrivateKeyFile == "" {
+			errs = append(errs, errors.New("PrivateKeyFile is empty"))
+		}
+		if _, err := tls.LoadX509KeyPair(sc.PublicKeyFile, sc.PrivateKeyFile); err != nil {
 			errs = append(errs,
 				errors.New(fmt.Sprintf("cannot use TLS config for [%s], %v", sc.ListenInterface, err)))
 		}
-
 	}
-	return errs
+	if len(errs) > 0 {
+		return errs
+	}
+
+	return nil
 }
 
 // Returns a diff between struct a & struct b.

+ 28 - 22
config_test.go

@@ -22,9 +22,8 @@ var configJsonA = `
 {
     "log_file" : "./tests/testlog",
     "log_level" : "debug",
-    "pid_file" : "/var/run/go-guerrilla.pid",
+    "pid_file" : "tests/go-guerrilla.pid",
     "allowed_hosts": ["spam4.me","grr.la"],
-    "backend_name" : "dummy",
     "backend_config" :
         {
             "log_received_mails" : true
@@ -96,9 +95,8 @@ var configJsonB = `
 {
     "log_file" : "./tests/testlog",
     "log_level" : "debug",
-    "pid_file" : "/var/run/different-go-guerrilla.pid",
+    "pid_file" : "tests/different-go-guerrilla.pid",
     "allowed_hosts": ["spam4.me","grr.la","newhost.com"],
-    "backend_name" : "dummy",
     "backend_config" :
         {
             "log_received_mails" : true
@@ -126,6 +124,7 @@ var configJsonB = `
             "listen_interface":"127.0.0.1:2527",
             "start_tls_on":true,
             "tls_always_on":false,
+            "log_file" : "./tests/testlog",
             "max_clients": 2
         },
 
@@ -138,7 +137,7 @@ var configJsonB = `
             "timeout":180,
             "listen_interface":"127.0.0.1:4654",
             "start_tls_on":false,
-            "tls_always_on":true,
+            "tls_always_on":false,
             "max_clients":1
         },
 
@@ -197,33 +196,40 @@ func TestConfigChangeEvents(t *testing.T) {
 
 	oldconf := &AppConfig{}
 	oldconf.Load([]byte(configJsonA))
-	logger, _ := log.GetLogger(oldconf.LogFile)
+	logger, _ := log.GetLogger(oldconf.LogFile, oldconf.LogLevel)
 	bcfg := backends.BackendConfig{"log_received_mails": true}
-	backend, _ := backends.New("dummy", bcfg, logger)
-	app, _ := New(oldconf, backend, logger)
+	backend, err := backends.New(bcfg, logger)
+	if err != nil {
+		t.Error("cannot create backend", err)
+	}
+	app, err := New(oldconf, backend, logger)
+	if err != nil {
+		t.Error("cannot create daemon", err)
+	}
 	// simulate timestamp change
+
 	time.Sleep(time.Second + time.Millisecond*500)
 	os.Chtimes(oldconf.Servers[1].PrivateKeyFile, time.Now(), time.Now())
 	os.Chtimes(oldconf.Servers[1].PublicKeyFile, time.Now(), time.Now())
 	newconf := &AppConfig{}
 	newconf.Load([]byte(configJsonB))
-	newconf.Servers[0].LogFile = "off" // test for log file change
-	newconf.LogLevel = "off"
+	newconf.Servers[0].LogFile = log.OutputOff.String() // test for log file change
+	newconf.LogLevel = log.InfoLevel.String()
 	newconf.LogFile = "off"
 	expectedEvents := map[Event]bool{
-		EvConfigPidFile:         false,
-		EvConfigLogFile:         false,
-		EvConfigLogLevel:        false,
-		EvConfigAllowedHosts:    false,
-		EvConfigEvServerNew:     false, // 127.0.0.1:4654 will be added
-		EvConfigServerRemove:    false, // 127.0.0.1:9999 server removed
-		EvConfigServerStop:      false, // 127.0.0.1:3333: server (disabled)
-		EvConfigServerLogFile:   false, // 127.0.0.1:2526
-		EvConfigServerLogReopen: false, // 127.0.0.1:2527
-		EvConfigServerTimeout:   false, // 127.0.0.1:2526 timeout
+		EventConfigPidFile:         false,
+		EventConfigLogFile:         false,
+		EventConfigLogLevel:        false,
+		EventConfigAllowedHosts:    false,
+		EventConfigServerNew:       false, // 127.0.0.1:4654 will be added
+		EventConfigServerRemove:    false, // 127.0.0.1:9999 server removed
+		EventConfigServerStop:      false, // 127.0.0.1:3333: server (disabled)
+		EventConfigServerLogFile:   false, // 127.0.0.1:2526
+		EventConfigServerLogReopen: false, // 127.0.0.1:2527
+		EventConfigServerTimeout:   false, // 127.0.0.1:2526 timeout
 		//"server_change:tls_config":    false, // 127.0.0.1:2526
-		EvConfigServerMaxClients: false, // 127.0.0.1:2526
-		EvConfigServerTLSConfig:  false, // 127.0.0.1:2527 timestamp changed on certificates
+		EventConfigServerMaxClients: false, // 127.0.0.1:2526
+		EventConfigServerTLSConfig:  false, // 127.0.0.1:2527 timestamp changed on certificates
 	}
 	toUnsubscribe := map[Event]func(c *AppConfig){}
 	toUnsubscribeSrv := map[Event]func(c *ServerConfig){}

+ 0 - 186
envelope/envelope.go

@@ -1,186 +0,0 @@
-package envelope
-
-import (
-	"bufio"
-	"bytes"
-	"encoding/base64"
-	"errors"
-	"fmt"
-	"github.com/sloonz/go-qprintable"
-	"gopkg.in/iconv.v1"
-	"io/ioutil"
-	"net/textproto"
-	"regexp"
-	"strings"
-)
-
-// EmailAddress encodes an email address of the form `<user@host>`
-type EmailAddress struct {
-	User string
-	Host string
-}
-
-func (ep *EmailAddress) String() string {
-	return fmt.Sprintf("%s@%s", ep.User, ep.Host)
-}
-
-func (ep *EmailAddress) IsEmpty() bool {
-	return ep.User == "" && ep.Host == ""
-}
-
-// Email represents a single SMTP message.
-type Envelope struct {
-	// Remote IP address
-	RemoteAddress string
-	// Message sent in EHLO command
-	Helo string
-	// Sender
-	MailFrom EmailAddress
-	// Recipients
-	RcptTo []EmailAddress
-	// Data stores the header and message body
-	Data bytes.Buffer
-	// Subject stores the subject of the email, extracted and decoded after calling ParseHeaders()
-	Subject string
-	// TLS is true if the email was received using a TLS connection
-	TLS bool
-	// Header stores the results from ParseHeaders()
-	Header textproto.MIMEHeader
-}
-
-// ParseHeaders parses the headers into Header field of the Envelope struct.
-// Data buffer must be full before calling.
-// It assumes that at most 30kb of email data can be a header
-// Decoding of encoding to UTF is only done on the Subject, where the result is assigned to the Subject field
-func (e *Envelope) ParseHeaders() error {
-	var err error
-	if e.Header != nil {
-		return errors.New("Headers already parsed")
-	}
-	all := e.Data.Bytes()
-
-	// find where the header ends, assuming that over 30 kb would be max
-	max := 1024 * 30
-	if len(all) < max {
-		max = len(all) - 1
-	}
-	headerEnd := bytes.Index(all[:max], []byte("\n\n"))
-
-	if headerEnd > -1 {
-		headerReader := textproto.NewReader(bufio.NewReader(bytes.NewBuffer(all[0:headerEnd])))
-		e.Header, err = headerReader.ReadMIMEHeader()
-		if err != nil {
-			// decode the subject
-			if subject, ok := e.Header["Subject"]; ok {
-				e.Subject = MimeHeaderDecode(subject[0])
-			}
-		}
-	} else {
-		err = errors.New("header not found")
-	}
-	return err
-}
-
-var mimeRegex, _ = regexp.Compile(`=\?(.+?)\?([QBqp])\?(.+?)\?=`)
-
-// Decode strings in Mime header format
-// eg. =?ISO-2022-JP?B?GyRCIVo9dztSOWJAOCVBJWMbKEI=?=
-func MimeHeaderDecode(str string) string {
-
-	matched := mimeRegex.FindAllStringSubmatch(str, -1)
-	var charset, encoding, payload string
-	if matched != nil {
-		for i := 0; i < len(matched); i++ {
-			if len(matched[i]) > 2 {
-				charset = matched[i][1]
-				encoding = strings.ToUpper(matched[i][2])
-				payload = matched[i][3]
-				switch encoding {
-				case "B":
-					str = strings.Replace(
-						str,
-						matched[i][0],
-						MailTransportDecode(payload, "base64", charset),
-						1)
-				case "Q":
-					str = strings.Replace(
-						str,
-						matched[i][0],
-						MailTransportDecode(payload, "quoted-printable", charset),
-						1)
-				}
-			}
-		}
-	}
-	return str
-}
-
-// decode from 7bit to 8bit UTF-8
-// encodingType can be "base64" or "quoted-printable"
-func MailTransportDecode(str string, encodingType string, charset string) string {
-	if charset == "" {
-		charset = "UTF-8"
-	} else {
-		charset = strings.ToUpper(charset)
-	}
-	if encodingType == "base64" {
-		str = fromBase64(str)
-	} else if encodingType == "quoted-printable" {
-		str = fromQuotedP(str)
-	}
-
-	if charset != "UTF-8" {
-		charset = fixCharset(charset)
-		// iconv is pretty good at what it does
-		if cd, err := iconv.Open("UTF-8", charset); err == nil {
-			defer func() {
-				cd.Close()
-				if r := recover(); r != nil {
-					//logln(1, fmt.Sprintf("Recovered in %v", r))
-				}
-			}()
-			// eg. charset can be "ISO-2022-JP"
-			return cd.ConvString(str)
-		}
-
-	}
-	return str
-}
-
-func fromBase64(data string) string {
-	buf := bytes.NewBufferString(data)
-	decoder := base64.NewDecoder(base64.StdEncoding, buf)
-	res, _ := ioutil.ReadAll(decoder)
-	return string(res)
-}
-
-func fromQuotedP(data string) string {
-	buf := bytes.NewBufferString(data)
-	decoder := qprintable.NewDecoder(qprintable.BinaryEncoding, buf)
-	res, _ := ioutil.ReadAll(decoder)
-	return string(res)
-}
-
-var charsetRegex, _ = regexp.Compile(`[_:.\/\\]`)
-
-func fixCharset(charset string) string {
-	fixed_charset := charsetRegex.ReplaceAllString(charset, "-")
-	// Fix charset
-	// borrowed from http://squirrelmail.svn.sourceforge.net/viewvc/squirrelmail/trunk/squirrelmail/include/languages.php?revision=13765&view=markup
-	// OE ks_c_5601_1987 > cp949
-	fixed_charset = strings.Replace(fixed_charset, "ks-c-5601-1987", "cp949", -1)
-	// Moz x-euc-tw > euc-tw
-	fixed_charset = strings.Replace(fixed_charset, "x-euc", "euc", -1)
-	// Moz x-windows-949 > cp949
-	fixed_charset = strings.Replace(fixed_charset, "x-windows_", "cp", -1)
-	// windows-125x and cp125x charsets
-	fixed_charset = strings.Replace(fixed_charset, "windows-", "cp", -1)
-	// ibm > cp
-	fixed_charset = strings.Replace(fixed_charset, "ibm", "cp", -1)
-	// iso-8859-8-i -> iso-8859-8
-	fixed_charset = strings.Replace(fixed_charset, "iso-8859-8-i", "iso-8859-8", -1)
-	if charset != fixed_charset {
-		return fixed_charset
-	}
-	return charset
-}

+ 89 - 0
event.go

@@ -0,0 +1,89 @@
+package guerrilla
+
+import (
+	evbus "github.com/asaskevich/EventBus"
+)
+
+type Event int
+
+const (
+	// when a new config was loaded
+	EventConfigNewConfig Event = iota
+	// when allowed_hosts changed
+	EventConfigAllowedHosts
+	// when pid_file changed
+	EventConfigPidFile
+	// when log_file changed
+	EventConfigLogFile
+	// when it's time to reload the main log file
+	EventConfigLogReopen
+	// when log level changed
+	EventConfigLogLevel
+	// when the backend's config changed
+	EventConfigBackendConfig
+	// when a new server was added
+	EventConfigServerNew
+	// when an existing server was removed
+	EventConfigServerRemove
+	// when a new server config was detected (general event)
+	EventConfigServerConfig
+	// when a server was enabled
+	EventConfigServerStart
+	// when a server was disabled
+	EventConfigServerStop
+	// when a server's log file changed
+	EventConfigServerLogFile
+	// when it's time to reload the server's log
+	EventConfigServerLogReopen
+	// when a server's timeout changed
+	EventConfigServerTimeout
+	// when a server's max clients changed
+	EventConfigServerMaxClients
+	// when a server's TLS config changed
+	EventConfigServerTLSConfig
+	// fired after the config has been loaded
+	EventConfigPostLoad
+)
+
+var eventList = [...]string{
+	"config_change:new_config",
+	"config_change:allowed_hosts",
+	"config_change:pid_file",
+	"config_change:log_file",
+	"config_change:reopen_log_file",
+	"config_change:log_level",
+	"config_change:backend_config",
+	"server_change:new_server",
+	"server_change:remove_server",
+	"server_change:update_config",
+	"server_change:start_server",
+	"server_change:stop_server",
+	"server_change:new_log_file",
+	"server_change:reopen_log_file",
+	"server_change:timeout",
+	"server_change:max_clients",
+	"server_change:tls_config",
+	// (assumed topic name) entry added so EventConfigPostLoad indexes into the list
+	"config_change:post_load",
+}
+
+func (e Event) String() string {
+	return eventList[e]
+}
+
+type EventHandler struct {
+	*evbus.EventBus
+}
+
+func (h *EventHandler) Subscribe(topic Event, fn interface{}) error {
+	if h.EventBus == nil {
+		h.EventBus = evbus.New()
+	}
+	return h.EventBus.Subscribe(topic.String(), fn)
+}
+
+func (h *EventHandler) Publish(topic Event, args ...interface{}) {
+	h.EventBus.Publish(topic.String(), args...)
+}
+
+func (h *EventHandler) Unsubscribe(topic Event, handler interface{}) error {
+	return h.EventBus.Unsubscribe(topic.String(), handler)
+}
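
The new `Event` type replaces raw string topics: typed constants index into a parallel slice of bus topic names, so subscribers use constants while EventBus still sees strings. A minimal, self-contained sketch of that pattern (names trimmed here for illustration, not the full package):

```go
package main

import "fmt"

// Reduced sketch of event.go: Event constants index into a parallel slice
// of bus topic strings via the Stringer interface.
type Event int

const (
	EventConfigNewConfig    Event = iota // when a new config was loaded
	EventConfigAllowedHosts              // when allowed_hosts changed
)

var eventList = [...]string{
	"config_change:new_config",
	"config_change:allowed_hosts",
}

// String returns the bus topic for the event.
func (e Event) String() string {
	return eventList[e]
}

func main() {
	// fmt picks up the Stringer, so events print as their topic names.
	fmt.Println(EventConfigNewConfig) // config_change:new_config
}
```

Note the real `eventList` must stay in lock-step with the constant block: a constant without a matching string would make `String()` panic with an index out of range.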

+ 3 - 12
glide.lock

@@ -1,5 +1,5 @@
-hash: 1a8fddacae80af03b3cb67287a2a39396a4b38a0c7f805f63325293a19230853
-updated: 2017-02-01T09:34:30.442747666+11:00
+hash: edbacc9b8ae3fcad4c01969c3efc5d815d79ffdc544d0bd56c501018696c2285
+updated: 2017-03-17T11:29:21.745184616+11:00
 imports:
 imports:
 - name: github.com/asaskevich/EventBus
 - name: github.com/asaskevich/EventBus
   version: ab9e5ceb2cc1ca6f36a5813c928c534e837681c2
   version: ab9e5ceb2cc1ca6f36a5813c928c534e837681c2
@@ -13,20 +13,11 @@ imports:
 - name: github.com/inconshreveable/mousetrap
 - name: github.com/inconshreveable/mousetrap
   version: 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75
   version: 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75
 - name: github.com/Sirupsen/logrus
 - name: github.com/Sirupsen/logrus
-  version: d26492970760ca5d33129d2d799e34be5c4782eb
-- name: github.com/sloonz/go-qprintable
-  version: 775b3a4592d5bfc47b0ba398ec0d4dba018e5926
+  version: ba1b36c82c5e05c4f912a88eab0dcd91a171688f
 - name: github.com/spf13/cobra
 - name: github.com/spf13/cobra
   version: b62566898a99f2db9c68ed0026aa0a052e59678d
   version: b62566898a99f2db9c68ed0026aa0a052e59678d
 - name: github.com/spf13/pflag
 - name: github.com/spf13/pflag
   version: 25f8b5b07aece3207895bf19f7ab517eb3b22a40
   version: 25f8b5b07aece3207895bf19f7ab517eb3b22a40
-- name: github.com/ziutek/mymysql
-  version: e08c2f35356576b3c3690c252fe5dca728ae9292
-  subpackages:
-  - autorc
-  - godrv
-  - mysql
-  - native
 - name: golang.org/x/sys
 - name: golang.org/x/sys
   version: 478fcf54317e52ab69f40bb4c7a1520288d7f7ea
   version: 478fcf54317e52ab69f40bb4c7a1520288d7f7ea
   subpackages:
   subpackages:

+ 0 - 6
glide.yaml

@@ -6,13 +6,7 @@ import:
   version: ~1.0.0
   version: ~1.0.0
   subpackages:
   subpackages:
   - redis
   - redis
-- package: github.com/sloonz/go-qprintable
 - package: github.com/spf13/cobra
 - package: github.com/spf13/cobra
-- package: github.com/ziutek/mymysql
-  version: ~1.5.4
-  subpackages:
-  - autorc
-  - godrv
 - package: gopkg.in/iconv.v1
 - package: gopkg.in/iconv.v1
   version: ~1.1.1
   version: ~1.1.1
 - package: github.com/asaskevich/EventBus
 - package: github.com/asaskevich/EventBus

+ 5 - 3
goguerrilla.conf.sample

@@ -9,9 +9,11 @@
       "guerrillamail.org"
       "guerrillamail.org"
     ],
     ],
     "pid_file" : "/var/run/go-guerrilla.pid",
     "pid_file" : "/var/run/go-guerrilla.pid",
-    "backend_name": "dummy",
     "backend_config": {
     "backend_config": {
-        "log_received_mails": true
+        "log_received_mails": true,
+        "save_workers_size": 1,
+        "save_process" : "HeadersParser|Header|Debugger",
+        "primary_mail_host" : "mail.example.com"
     },
     },
     "servers" : [
     "servers" : [
         {
         {
@@ -28,7 +30,7 @@
             "log_file" : "stderr"
             "log_file" : "stderr"
         },
         },
         {
         {
-            "is_enabled" : true,
+            "is_enabled" : false,
             "host_name":"mail.test.com",
             "host_name":"mail.test.com",
             "max_size":1000000,
             "max_size":1000000,
             "private_key_file":"/path/to/pem/file/test.com.key",
             "private_key_file":"/path/to/pem/file/test.com.key",

+ 159 - 67
guerrilla.go

@@ -2,9 +2,10 @@ package guerrilla
 
 
 import (
 import (
 	"errors"
 	"errors"
-	evbus "github.com/asaskevich/EventBus"
+	"fmt"
 	"github.com/flashmob/go-guerrilla/backends"
 	"github.com/flashmob/go-guerrilla/backends"
 	"github.com/flashmob/go-guerrilla/log"
 	"github.com/flashmob/go-guerrilla/log"
+	"os"
 	"sync"
 	"sync"
 	"sync/atomic"
 	"sync/atomic"
 )
 )
@@ -45,51 +46,67 @@ type Guerrilla interface {
 type guerrilla struct {
 type guerrilla struct {
 	Config  AppConfig
 	Config  AppConfig
 	servers map[string]*server
 	servers map[string]*server
-	backend backends.Backend
 	// guard controls access to g.servers
 	// guard controls access to g.servers
 	guard sync.Mutex
 	guard sync.Mutex
 	state int8
 	state int8
-	bus   *evbus.EventBus
+	EventHandler
 	logStore
 	logStore
+	backendStore
 }
 }
 
 
 type logStore struct {
 type logStore struct {
 	atomic.Value
 	atomic.Value
 }
 }
 
 
+type backendStore struct {
+	atomic.Value
+}
+
 // Get loads the log.logger in an atomic operation. Returns a stderr logger if not able to load
 // Get loads the log.logger in an atomic operation. Returns a stderr logger if not able to load
 func (ls *logStore) mainlog() log.Logger {
 func (ls *logStore) mainlog() log.Logger {
 	if v, ok := ls.Load().(log.Logger); ok {
 	if v, ok := ls.Load().(log.Logger); ok {
 		return v
 		return v
 	}
 	}
-	l, _ := log.GetLogger(log.OutputStderr.String())
+	l, _ := log.GetLogger(log.OutputStderr.String(), log.InfoLevel.String())
 	return l
 	return l
 }
 }
 
 
 // storeMainlog stores the log value in an atomic operation
 // storeMainlog stores the log value in an atomic operation
-func (ls *logStore) storeMainlog(log log.Logger) {
+func (ls *logStore) setMainlog(log log.Logger) {
 	ls.Store(log)
 	ls.Store(log)
 }
 }
 
 
-// Returns a new instance of Guerrilla with the given config, not yet running.
+// Returns a new instance of Guerrilla with the given config, not yet running. The backend is started.
 func New(ac *AppConfig, b backends.Backend, l log.Logger) (Guerrilla, error) {
 func New(ac *AppConfig, b backends.Backend, l log.Logger) (Guerrilla, error) {
 	g := &guerrilla{
 	g := &guerrilla{
 		Config:  *ac, // take a local copy
 		Config:  *ac, // take a local copy
 		servers: make(map[string]*server, len(ac.Servers)),
 		servers: make(map[string]*server, len(ac.Servers)),
-		backend: b,
-		bus:     evbus.New(),
 	}
 	}
-	g.storeMainlog(l)
+	g.backendStore.Store(b)
+	g.setMainlog(l)
 
 
 	if ac.LogLevel != "" {
 	if ac.LogLevel != "" {
-		g.mainlog().SetLevel(ac.LogLevel)
+		if h, ok := l.(*log.HookedLogger); ok {
+			if h, err := log.GetLogger(h.GetLogDest(), ac.LogLevel); err == nil {
+				g.setMainlog(h)
+			}
+		}
 	}
 	}
 
 
 	g.state = GuerrillaStateNew
 	g.state = GuerrillaStateNew
 	err := g.makeServers()
 	err := g.makeServers()
 
 
+	// start backend for processing email
+	err = g.backend().Start()
+
+	if err != nil {
+		return g, err
+	}
+	g.writePid()
+
 	// subscribe for any events that may come in while running
 	// subscribe for any events that may come in while running
 	g.subscribeEvents()
 	g.subscribeEvents()
+
 	return g, err
 	return g, err
 }
 }
 
 
@@ -102,12 +119,12 @@ func (g *guerrilla) makeServers() error {
 			// server already instantiated
 			// server already instantiated
 			continue
 			continue
 		}
 		}
-		if errs := sc.Validate(); errs != nil {
+		if err := sc.Validate(); err != nil {
 			g.mainlog().WithError(errs).Errorf("Failed to create server [%s]", sc.ListenInterface)
 			g.mainlog().WithError(errs).Errorf("Failed to create server [%s]", sc.ListenInterface)
-			errs = append(errs, errs...)
+			errs = append(errs, err)
 			continue
 			continue
 		} else {
 		} else {
-			server, err := newServer(&sc, g.backend, g.mainlog())
+			server, err := newServer(&sc, g.backend(), g.mainlog())
 			if err != nil {
 			if err != nil {
 				g.mainlog().WithError(err).Errorf("Failed to create server [%s]", sc.ListenInterface)
 				g.mainlog().WithError(err).Errorf("Failed to create server [%s]", sc.ListenInterface)
 				errs = append(errs, err)
 				errs = append(errs, err)
@@ -127,7 +144,7 @@ func (g *guerrilla) makeServers() error {
 	return errs
 	return errs
 }
 }
 
 
-// find a server by interface, retuning the server or err
+// findServer finds a server by its listen interface (iface), returning the server or an error
 func (g *guerrilla) findServer(iface string) (*server, error) {
 func (g *guerrilla) findServer(iface string) (*server, error) {
 	g.guard.Lock()
 	g.guard.Lock()
 	defer g.guard.Unlock()
 	defer g.guard.Unlock()
@@ -137,6 +154,7 @@ func (g *guerrilla) findServer(iface string) (*server, error) {
 	return nil, errors.New("server not found in g.servers")
 	return nil, errors.New("server not found in g.servers")
 }
 }
 
 
+// removeServer removes a server from the list of servers
 func (g *guerrilla) removeServer(iface string) {
 func (g *guerrilla) removeServer(iface string) {
 	g.guard.Lock()
 	g.guard.Lock()
 	defer g.guard.Unlock()
 	defer g.guard.Unlock()
@@ -174,12 +192,12 @@ func (g *guerrilla) mapServers(callback func(*server)) map[string]*server {
 func (g *guerrilla) subscribeEvents() {
 func (g *guerrilla) subscribeEvents() {
 
 
 	// main config changed
 	// main config changed
-	g.Subscribe(EvConfigNewConfig, func(c *AppConfig) {
+	g.Subscribe(EventConfigNewConfig, func(c *AppConfig) {
 		g.setConfig(c)
 		g.setConfig(c)
 	})
 	})
 
 
 	// allowed_hosts changed, set for all servers
 	// allowed_hosts changed, set for all servers
-	g.Subscribe(EvConfigAllowedHosts, func(c *AppConfig) {
+	g.Subscribe(EventConfigAllowedHosts, func(c *AppConfig) {
 		g.mapServers(func(server *server) {
 		g.mapServers(func(server *server) {
 			server.setAllowedHosts(c.AllowedHosts)
 			server.setAllowedHosts(c.AllowedHosts)
 		})
 		})
@@ -187,16 +205,16 @@ func (g *guerrilla) subscribeEvents() {
 	})
 	})
 
 
 	// the main log file changed
 	// the main log file changed
-	g.Subscribe(EvConfigLogFile, func(c *AppConfig) {
+	g.Subscribe(EventConfigLogFile, func(c *AppConfig) {
 		var err error
 		var err error
 		var l log.Logger
 		var l log.Logger
-		if l, err = log.GetLogger(c.LogFile); err == nil {
-			g.storeMainlog(l)
+		if l, err = log.GetLogger(c.LogFile, c.LogLevel); err == nil {
+			g.setMainlog(l)
 			g.mapServers(func(server *server) {
 			g.mapServers(func(server *server) {
 				// it will change server's logger when the next client gets accepted
 				// it will change server's logger when the next client gets accepted
 				server.mainlogStore.Store(l)
 				server.mainlogStore.Store(l)
 			})
 			})
-			g.mainlog().Infof("main log for new clients changed to to [%s]", c.LogFile)
+			g.mainlog().Infof("main log for new clients changed to [%s]", c.LogFile)
 		} else {
 		} else {
 			g.mainlog().WithError(err).Errorf("main logging change failed [%s]", c.LogFile)
 			g.mainlog().WithError(err).Errorf("main logging change failed [%s]", c.LogFile)
 		}
 		}
@@ -204,31 +222,41 @@ func (g *guerrilla) subscribeEvents() {
 	})
 	})
 
 
 	// re-open the main log file (file not changed)
 	// re-open the main log file (file not changed)
-	g.Subscribe(EvConfigLogReopen, func(c *AppConfig) {
+	g.Subscribe(EventConfigLogReopen, func(c *AppConfig) {
 		g.mainlog().Reopen()
 		g.mainlog().Reopen()
 		g.mainlog().Infof("re-opened main log file [%s]", c.LogFile)
 		g.mainlog().Infof("re-opened main log file [%s]", c.LogFile)
 	})
 	})
 
 
 	// when log level changes, apply to mainlog and server logs
 	// when log level changes, apply to mainlog and server logs
-	g.Subscribe(EvConfigLogLevel, func(c *AppConfig) {
-		g.mainlog().SetLevel(c.LogLevel)
-		g.mapServers(func(server *server) {
-			server.log.SetLevel(c.LogLevel)
-		})
-		g.mainlog().Infof("log level changed to [%s]", c.LogLevel)
+	g.Subscribe(EventConfigLogLevel, func(c *AppConfig) {
+		l, err := log.GetLogger(g.mainlog().GetLogDest(), c.LogLevel)
+		if err == nil {
+			g.logStore.Store(l)
+			g.mapServers(func(server *server) {
+				server.logStore.Store(l)
+			})
+			g.mainlog().Infof("log level changed to [%s]", c.LogLevel)
+		}
+	})
+
+	// write out our pid whenever the file name changes in the config
+	g.Subscribe(EventConfigPidFile, func(ac *AppConfig) {
+		g.writePid()
 	})
 	})
 
 
 	// server config was updated
 	// server config was updated
-	g.Subscribe(EvConfigServerConfig, func(sc *ServerConfig) {
+	g.Subscribe(EventConfigServerConfig, func(sc *ServerConfig) {
 		g.setServerConfig(sc)
 		g.setServerConfig(sc)
 	})
 	})
 
 
 	// add a new server to the config & start
 	// add a new server to the config & start
-	g.Subscribe(EvConfigEvServerNew, func(sc *ServerConfig) {
+	g.Subscribe(EventConfigServerNew, func(sc *ServerConfig) {
+		g.mainlog().Debugf("event fired [%s] %s", EventConfigServerNew, sc.ListenInterface)
 		if _, err := g.findServer(sc.ListenInterface); err != nil {
 		if _, err := g.findServer(sc.ListenInterface); err != nil {
 			// not found, lets add it
 			// not found, lets add it
+			//
 			if err := g.makeServers(); err != nil {
 			if err := g.makeServers(); err != nil {
-				g.mainlog().WithError(err).Error("cannot add server [%s]", sc.ListenInterface)
+				g.mainlog().WithError(err).Errorf("cannot add server [%s]", sc.ListenInterface)
 				return
 				return
 			}
 			}
 			g.mainlog().Infof("New server added [%s]", sc.ListenInterface)
 			g.mainlog().Infof("New server added [%s]", sc.ListenInterface)
@@ -238,10 +266,12 @@ func (g *guerrilla) subscribeEvents() {
 					g.mainlog().WithError(err).Info("Event server_change:new_server returned errors when starting")
 					g.mainlog().WithError(err).Info("Event server_change:new_server returned errors when starting")
 				}
 				}
 			}
 			}
+		} else {
+			g.mainlog().Debugf("new event, but server already found")
 		}
 		}
 	})
 	})
 	// start a server that already exists in the config and has been enabled
 	// start a server that already exists in the config and has been enabled
-	g.Subscribe(EvConfigServerStart, func(sc *ServerConfig) {
+	g.Subscribe(EventConfigServerStart, func(sc *ServerConfig) {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 			if server.state == ServerStateStopped || server.state == ServerStateNew {
 			if server.state == ServerStateStopped || server.state == ServerStateNew {
 				g.mainlog().Infof("Starting server [%s]", server.listenInterface)
 				g.mainlog().Infof("Starting server [%s]", server.listenInterface)
@@ -253,7 +283,7 @@ func (g *guerrilla) subscribeEvents() {
 		}
 		}
 	})
 	})
 	// stop running a server
 	// stop running a server
-	g.Subscribe(EvConfigServerStop, func(sc *ServerConfig) {
+	g.Subscribe(EventConfigServerStop, func(sc *ServerConfig) {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 			if server.state == ServerStateRunning {
 			if server.state == ServerStateRunning {
 				server.Shutdown()
 				server.Shutdown()
@@ -262,7 +292,7 @@ func (g *guerrilla) subscribeEvents() {
 		}
 		}
 	})
 	})
 	// server was removed from config
 	// server was removed from config
-	g.Subscribe(EvConfigServerRemove, func(sc *ServerConfig) {
+	g.Subscribe(EventConfigServerRemove, func(sc *ServerConfig) {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 			server.Shutdown()
 			server.Shutdown()
 			g.removeServer(sc.ListenInterface)
 			g.removeServer(sc.ListenInterface)
@@ -271,7 +301,7 @@ func (g *guerrilla) subscribeEvents() {
 	})
 	})
 
 
 	// TLS changes
 	// TLS changes
-	g.Subscribe(EvConfigServerTLSConfig, func(sc *ServerConfig) {
+	g.Subscribe(EventConfigServerTLSConfig, func(sc *ServerConfig) {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 			if err := server.configureSSL(); err == nil {
 			if err := server.configureSSL(); err == nil {
 				g.mainlog().Infof("Server [%s] new TLS configuration loaded", sc.ListenInterface)
 				g.mainlog().Infof("Server [%s] new TLS configuration loaded", sc.ListenInterface)
@@ -281,24 +311,26 @@ func (g *guerrilla) subscribeEvents() {
 		}
 		}
 	})
 	})
 	// when server's timeout change.
 	// when server's timeout change.
-	g.Subscribe(EvConfigServerTimeout, func(sc *ServerConfig) {
+	g.Subscribe(EventConfigServerTimeout, func(sc *ServerConfig) {
 		g.mapServers(func(server *server) {
 		g.mapServers(func(server *server) {
 			server.setTimeout(sc.Timeout)
 			server.setTimeout(sc.Timeout)
 		})
 		})
 	})
 	})
 	// when server's max clients change.
 	// when server's max clients change.
-	g.Subscribe(EvConfigServerMaxClients, func(sc *ServerConfig) {
+	g.Subscribe(EventConfigServerMaxClients, func(sc *ServerConfig) {
 		g.mapServers(func(server *server) {
 		g.mapServers(func(server *server) {
 			// TODO resize the pool somehow
 			// TODO resize the pool somehow
 		})
 		})
 	})
 	})
 	// when a server's log file changes
 	// when a server's log file changes
-	g.Subscribe(EvConfigServerLogFile, func(sc *ServerConfig) {
+	g.Subscribe(EventConfigServerLogFile, func(sc *ServerConfig) {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 			var err error
 			var err error
 			var l log.Logger
 			var l log.Logger
-			if l, err = log.GetLogger(sc.LogFile); err == nil {
-				g.storeMainlog(l)
+			level := g.mainlog().GetLevel()
+			if l, err = log.GetLogger(sc.LogFile, level); err == nil {
+				g.setMainlog(l)
+				backends.Svc.SetMainlog(l)
 				// it will change to the new logger on the next accepted client
 				// it will change to the new logger on the next accepted client
 				server.logStore.Store(l)
 				server.logStore.Store(l)
 				g.mainlog().Infof("Server [%s] changed, new clients will log to: [%s]",
 				g.mainlog().Infof("Server [%s] changed, new clients will log to: [%s]",
@@ -315,15 +347,61 @@ func (g *guerrilla) subscribeEvents() {
 		}
 		}
 	})
 	})
 	// when the daemon caught a sighup, event for individual server
 	// when the daemon caught a sighup, event for individual server
-	g.Subscribe(EvConfigServerLogReopen, func(sc *ServerConfig) {
+	g.Subscribe(EventConfigServerLogReopen, func(sc *ServerConfig) {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
 		if server, err := g.findServer(sc.ListenInterface); err == nil {
-			server.log.Reopen()
+			server.log().Reopen()
 			g.mainlog().Infof("Server [%s] re-opened log file [%s]", sc.ListenInterface, sc.LogFile)
 			g.mainlog().Infof("Server [%s] re-opened log file [%s]", sc.ListenInterface, sc.LogFile)
 		}
 		}
 	})
 	})
+	// when the backend changes
+	g.Subscribe(EventConfigBackendConfig, func(appConfig *AppConfig) {
+		logger, _ := log.GetLogger(appConfig.LogFile, appConfig.LogLevel)
+		// shutdown the backend first.
+		var err error
+		if err = g.backend().Shutdown(); err != nil {
+			logger.WithError(err).Warn("Backend failed to shutdown")
+			return
+		}
+		// init a new backend, Revert to old backend config if it fails
+		if newBackend, newErr := backends.New(appConfig.BackendConfig, logger); newErr != nil {
+			logger.WithError(newErr).Error("Error while loading the backend")
+			err = g.backend().Reinitialize()
+			if err != nil {
+				logger.WithError(err).Fatal("failed to revert to old backend config")
+				return
+			}
+			err = g.backend().Start()
+			if err != nil {
+				logger.WithError(err).Fatal("failed to start backend with old config")
+				return
+			}
+			logger.Info("reverted to old backend config")
+		} else {
+			// swap to the new backend (assuming the old backend was shut down so it can be safely swapped)
+			if err := newBackend.Start(); err != nil {
+				logger.WithError(err).Error("backend could not start")
+			}
+			logger.Info("new backend started")
+			g.storeBackend(newBackend)
+		}
+	})
 
 
 }
 }
 
 
+func (g *guerrilla) storeBackend(b backends.Backend) {
+	g.backendStore.Store(b)
+	g.mapServers(func(server *server) {
+		server.setBackend(b)
+	})
+}
+
+func (g *guerrilla) backend() backends.Backend {
+	if b, ok := g.backendStore.Load().(backends.Backend); ok {
+		return b
+	}
+	return nil
+}
+
 // Entry point for the application. Starts all servers.
 // Entry point for the application. Starts all servers.
 func (g *guerrilla) Start() error {
 func (g *guerrilla) Start() error {
 	var startErrors Errors
 	var startErrors Errors
@@ -335,13 +413,17 @@ func (g *guerrilla) Start() error {
 	if len(g.servers) == 0 {
 	if len(g.servers) == 0 {
 		return append(startErrors, errors.New("No servers to start, please check the config"))
 		return append(startErrors, errors.New("No servers to start, please check the config"))
 	}
 	}
+	if g.state == GuerrillaStateStopped {
+		// when a backend is shutdown, we need to re-initialize before it can be started again
+		g.backend().Reinitialize()
+		g.backend().Start()
+	}
 	// channel for reading errors
 	// channel for reading errors
 	errs := make(chan error, len(g.servers))
 	errs := make(chan error, len(g.servers))
 	var startWG sync.WaitGroup
 	var startWG sync.WaitGroup
 
 
 	// start servers, send any errors back to errs channel
 	// start servers, send any errors back to errs channel
 	for ListenInterface := range g.servers {
 	for ListenInterface := range g.servers {
-		g.mainlog().Infof("Starting: %s", ListenInterface)
 		if !g.servers[ListenInterface].isEnabled() {
 		if !g.servers[ListenInterface].isEnabled() {
 			// not enabled
 			// not enabled
 			continue
 			continue
@@ -352,6 +434,7 @@ func (g *guerrilla) Start() error {
 		}
 		}
 		startWG.Add(1)
 		startWG.Add(1)
 		go func(s *server) {
 		go func(s *server) {
+			g.mainlog().Infof("Starting: %s", s.listenInterface)
 			if err := s.Start(&startWG); err != nil {
 			if err := s.Start(&startWG); err != nil {
 				errs <- err
 				errs <- err
 			}
 			}
@@ -369,47 +452,56 @@ func (g *guerrilla) Start() error {
 	}
 	}
 	if len(startErrors) > 0 {
 	if len(startErrors) > 0 {
 		return startErrors
 		return startErrors
-	} else {
-		if gw, ok := g.backend.(*backends.BackendGateway); ok {
-			if gw.State == backends.BackendStateShuttered {
-				_ = gw.Reinitialize()
-			}
-		}
 	}
 	}
 	return nil
 	return nil
 }
 }
 
 
 func (g *guerrilla) Shutdown() {
 func (g *guerrilla) Shutdown() {
+
+	// shut down the servers first
+	g.mapServers(func(s *server) {
+		if s.state == ServerStateRunning {
+			s.Shutdown()
+			g.mainlog().Infof("shutdown completed for [%s]", s.listenInterface)
+		}
+	})
+
 	g.guard.Lock()
 	g.guard.Lock()
 	defer func() {
 	defer func() {
 		g.state = GuerrillaStateStopped
 		g.state = GuerrillaStateStopped
 		defer g.guard.Unlock()
 		defer g.guard.Unlock()
 	}()
 	}()
-	for ListenInterface, s := range g.servers {
-		if s.state == ServerStateRunning {
-			s.Shutdown()
-			g.mainlog().Infof("shutdown completed for [%s]", ListenInterface)
-		}
-	}
-	if err := g.backend.Shutdown(); err != nil {
+	if err := g.backend().Shutdown(); err != nil {
 		g.mainlog().WithError(err).Warn("Backend failed to shutdown")
 		g.mainlog().WithError(err).Warn("Backend failed to shutdown")
 	} else {
 	} else {
 		g.mainlog().Infof("Backend shutdown completed")
 		g.mainlog().Infof("Backend shutdown completed")
 	}
 	}
 }
 }
 
 
-func (g *guerrilla) Subscribe(topic Event, fn interface{}) error {
-	return g.bus.Subscribe(topic.String(), fn)
-}
-
-func (g *guerrilla) Publish(topic Event, args ...interface{}) {
-	g.bus.Publish(topic.String(), args...)
-}
-
-func (g *guerrilla) Unsubscribe(topic Event, handler interface{}) error {
-	return g.bus.Unsubscribe(topic.String(), handler)
+// SetLogger sets the logger for the app and propagates it to sub-packages (e.g. backends)
+func (g *guerrilla) SetLogger(l log.Logger) {
+	g.setMainlog(l)
+	backends.Svc.SetMainlog(l)
 }
 }
 
 
-func (g *guerrilla) SetLogger(l log.Logger) {
-	g.storeMainlog(l)
+// writePid writes the pid (process id) to the file specified in the config.
+// Won't write anything if no file specified
+func (g *guerrilla) writePid() error {
+	if len(g.Config.PidFile) > 0 {
+		if f, err := os.Create(g.Config.PidFile); err == nil {
+			defer f.Close()
+			pid := os.Getpid()
+			if _, err := f.WriteString(fmt.Sprintf("%d", pid)); err == nil {
+				f.Sync()
+				g.mainlog().Infof("pid_file (%s) written with pid:%v", g.Config.PidFile, pid)
+			} else {
+				g.mainlog().WithError(err).Errorf("Error while writing pidFile (%s)", g.Config.PidFile)
+				return err
+			}
+		} else {
+			g.mainlog().WithError(err).Errorf("Error while creating pidFile (%s)", g.Config.PidFile)
+			return err
+		}
+	}
+	return nil
 }
 }

+ 189 - 0
log/hook.go

@@ -0,0 +1,189 @@
+package log
+
+import (
+	"bufio"
+	log "github.com/Sirupsen/logrus"
+	"io"
+	"io/ioutil"
+	"os"
+	"strings"
+	"sync"
+)
+
+// custom logrus hook
+
+// hookMu ensures all io operations are synced. Always on exported functions
+var hookMu sync.Mutex
+
+// LoggerHook extends the log.Hook interface by adding Reopen()
+type LoggerHook interface {
+	log.Hook
+	Reopen() error
+}
+type LogrusHook struct {
+	w io.Writer
+	// file descriptor, can be re-opened
+	fd *os.File
+	// filename to the file descriptor
+	fname string
+	// txtFormatter that doesn't use colors
+	plainTxtFormatter *log.TextFormatter
+
+	mu sync.Mutex
+}
+
+// NewLogrusHook creates a new hook. dest can be a file name or one of the following strings:
+// "stderr" - log to stderr, lines will be written to os.Stderr
+// "stdout" - log to stdout, lines will be written to os.Stdout
+// "off" - no log, lines will be written to ioutil.Discard
+func NewLogrusHook(dest string) (LoggerHook, error) {
+	hookMu.Lock()
+	defer hookMu.Unlock()
+	hook := LogrusHook{fname: dest}
+	err := hook.setup(dest)
+	return &hook, err
+}
+
+type OutputOption int
+
+const (
+	OutputStderr OutputOption = 1 + iota
+	OutputStdout
+	OutputOff
+	OutputNull
+	OutputFile
+)
+
+var outputOptions = [...]string{
+	"stderr",
+	"stdout",
+	"off",
+	"",
+	"file",
+}
+
+func (o OutputOption) String() string {
+	return outputOptions[o-1]
+}
+
+func parseOutputOption(str string) OutputOption {
+	switch str {
+	case "stderr":
+		return OutputStderr
+	case "stdout":
+		return OutputStdout
+	case "off":
+		return OutputOff
+	case "":
+		return OutputNull
+	}
+	return OutputFile
+}
+
+// setup sets the hook's writer w and file descriptor fd
+// assumes the hook.fd is closed and nil
+func (hook *LogrusHook) setup(dest string) error {
+
+	out := parseOutputOption(dest)
+	if out == OutputNull || out == OutputStderr {
+		hook.w = os.Stderr
+	} else if out == OutputStdout {
+		hook.w = os.Stdout
+	} else if out == OutputOff {
+		hook.w = ioutil.Discard
+	} else {
+		if _, err := os.Stat(dest); err == nil {
+			// file exists open the file for appending
+			if err := hook.openAppend(dest); err != nil {
+				return err
+			}
+		} else {
+			// create the file
+			if err := hook.openCreate(dest); err != nil {
+				return err
+			}
+		}
+	}
+	// disable colors when writing to file
+	if hook.fd != nil {
+		hook.plainTxtFormatter = &log.TextFormatter{DisableColors: true}
+	}
+	return nil
+}
+
+// openAppend opens the dest file for appending. Defaults to os.Stderr if it can't open dest
+func (hook *LogrusHook) openAppend(dest string) (err error) {
+	fd, err := os.OpenFile(dest, os.O_APPEND|os.O_WRONLY, 0644)
+	if err != nil {
+		log.WithError(err).Error("Could not open log file for appending")
+		hook.w = os.Stderr
+		hook.fd = nil
+		return
+	}
+	hook.w = bufio.NewWriter(fd)
+	hook.fd = fd
+	return
+}
+
+// openCreate creates (or truncates) the dest file for writing. Defaults to os.Stderr if it can't create dest
+func (hook *LogrusHook) openCreate(dest string) (err error) {
+	fd, err := os.OpenFile(dest, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
+	if err != nil {
+		log.WithError(err).Error("Could not create log file")
+		hook.w = os.Stderr
+		hook.fd = nil
+		return
+	}
+	hook.w = bufio.NewWriter(fd)
+	hook.fd = fd
+	return
+}
+
+// Fire implements the logrus Hook interface. It disables color text formatting if writing to a file
+func (hook *LogrusHook) Fire(entry *log.Entry) error {
+	hookMu.Lock()
+	defer hookMu.Unlock()
+	if line, err := entry.String(); err == nil {
+		r := strings.NewReader(line)
+		if _, err = io.Copy(hook.w, r); err != nil {
+			return err
+		}
+		if wb, ok := hook.w.(*bufio.Writer); ok {
+			if err := wb.Flush(); err != nil {
+				return err
+			}
+			if hook.fd != nil {
+				hook.fd.Sync()
+			}
+		}
+		return err
+	} else {
+		return err
+	}
+}
+
+// Levels implements the logrus Hook interface
+func (hook *LogrusHook) Levels() []log.Level {
+	return log.AllLevels
+}
+
+// Reopen closes and re-open log file descriptor, which is a special feature of this hook
+func (hook *LogrusHook) Reopen() error {
+	hookMu.Lock()
+	defer hookMu.Unlock()
+	var err error
+	if hook.fd != nil {
+		if err = hook.fd.Close(); err != nil {
+			return err
+		}
+		// The file could have been re-named by an external program such as logrotate(8)
+		if _, err := os.Stat(hook.fname); err != nil {
+			// The file doesn't exist, create a new one.
+			return hook.openCreate(hook.fname)
+		} else {
+			return hook.openAppend(hook.fname)
+		}
+	}
+	return err
+
+}

+ 102 - 210
log/log.go

@@ -1,16 +1,56 @@
 package log
 package log
 
 
 import (
 import (
-	"bufio"
 	log "github.com/Sirupsen/logrus"
 	log "github.com/Sirupsen/logrus"
 	"io"
 	"io"
 	"io/ioutil"
 	"io/ioutil"
 	"net"
 	"net"
 	"os"
 	"os"
-	"strings"
 	"sync"
 	"sync"
 )
 )
 
 
+// The following are taken from logrus
+const (
+	// PanicLevel level, highest level of severity. Logs and then calls panic with the
+	// message passed to Debug, Info, ...
+	PanicLevel Level = iota
+	// FatalLevel level. Logs and then calls `os.Exit(1)`. It will exit even if the
+	// logging level is set to Panic.
+	FatalLevel
+	// ErrorLevel level. Logs. Used for errors that should definitely be noted.
+	// Commonly used for hooks to send errors to an error tracking service.
+	ErrorLevel
+	// WarnLevel level. Non-critical entries that deserve eyes.
+	WarnLevel
+	// InfoLevel level. General operational entries about what's going on inside the
+	// application.
+	InfoLevel
+	// DebugLevel level. Usually only enabled when debugging. Very verbose logging.
+	DebugLevel
+)
+
+type Level uint8
+
+// Convert the Level to a string. E.g. PanicLevel becomes "panic".
+func (level Level) String() string {
+	switch level {
+	case DebugLevel:
+		return "debug"
+	case InfoLevel:
+		return "info"
+	case WarnLevel:
+		return "warning"
+	case ErrorLevel:
+		return "error"
+	case FatalLevel:
+		return "fatal"
+	case PanicLevel:
+		return "panic"
+	}
+
+	return "unknown"
+}
+
 type Logger interface {
 	log.FieldLogger
 	WithConn(conn net.Conn) *log.Entry
@@ -30,9 +70,18 @@ type HookedLogger struct {
 	*log.Logger
 
 	h LoggerHook
+
+	// destination, file name or "stderr", "stdout" or "off"
+	dest string
+
+	oo OutputOption
+}
+
+type loggerKey struct {
+	dest, level string
 }
 
-type loggerCache map[string]Logger
+type loggerCache map[loggerKey]Logger
 
 // loggers store the cached loggers created by NewLogger
 var loggers struct {
@@ -52,27 +101,34 @@ var loggers struct {
 // Each Logger returned is cached on dest, subsequent call will get the cached logger if dest matches
 // If there was an error, the log will revert to stderr instead of using a custom hook
 
-func GetLogger(dest string) (Logger, error) {
+func GetLogger(dest string, level string) (Logger, error) {
 	loggers.Lock()
 	defer loggers.Unlock()
+	key := loggerKey{dest, level}
 	if loggers.cache == nil {
 		loggers.cache = make(loggerCache, 1)
 	} else {
-		if l, ok := loggers.cache[dest]; ok {
+		if l, ok := loggers.cache[key]; ok {
 			// return the one we found in the cache
 			return l, nil
 		}
 	}
-	logrus := log.New()
-	// we'll use the hook to output instead
-	logrus.Out = ioutil.Discard
-
-	l := &HookedLogger{}
+	o := parseOutputOption(dest)
+	logrus, err := newLogrus(o, level)
+	if err != nil {
+		return nil, err
+	}
+	l := &HookedLogger{dest: dest}
 	l.Logger = logrus
 
 	// cache it
-	loggers.cache[dest] = l
+	loggers.cache[key] = l
 
+	if o != OutputFile {
+		return l, nil
+	}
+	// we'll use the hook to output instead
+	logrus.Out = ioutil.Discard
 	// setup the hook
 	if h, err := NewLogrusHook(dest); err != nil {
 		// revert back to stderr
@@ -87,6 +143,36 @@ func GetLogger(dest string) (Logger, error) {
 
 }
 
+func newLogrus(o OutputOption, level string) (*log.Logger, error) {
+	logLevel, err := log.ParseLevel(level)
+	if err != nil {
+		return nil, err
+	}
+	var out io.Writer
+
+	if o != OutputFile {
+		if o == OutputNull || o == OutputStderr {
+			out = os.Stderr
+		} else if o == OutputStdout {
+			out = os.Stdout
+		} else if o == OutputOff {
+			out = ioutil.Discard
+		}
+	} else {
+		// we'll use a hook to output instead
+		out = ioutil.Discard
+	}
+
+	logger := &log.Logger{
+		Out:       out,
+		Formatter: new(log.TextFormatter),
+		Hooks:     make(log.LevelHooks),
+		Level:     logLevel,
+	}
+
+	return logger, nil
+}
+
 // AddHook adds a new logrus hook
 func (l *HookedLogger) AddHook(h log.Hook) {
 	log.AddHook(h)
@@ -103,7 +189,6 @@ func (l *HookedLogger) SetLevel(level string) {
 	if logLevel, err = log.ParseLevel(level); err != nil {
 		return
 	}
-	l.Level = logLevel
 	log.SetLevel(logLevel)
 }
 
@@ -114,12 +199,15 @@ func (l *HookedLogger) GetLevel() string {
 
 // Reopen closes the log file and re-opens it
 func (l *HookedLogger) Reopen() error {
+	if l.h == nil {
+		return nil
+	}
 	return l.h.Reopen()
 }
 
-// Fgetname Gets the file name
+// GetLogDest Gets the file name
 func (l *HookedLogger) GetLogDest() string {
-	return l.h.GetLogDest()
+	return l.dest
 }
 
 // WithConn extends logrus to be able to log with a net.Conn
@@ -131,199 +219,3 @@ func (l *HookedLogger) WithConn(conn net.Conn) *log.Entry {
 	}
 	return l.WithField("addr", addr)
 }
-
-// custom logrus hook
-
-// hookMu ensures all io operations are synced. Always on exported functions
-var hookMu sync.Mutex
-
-// LoggerHook extends the log.Hook interface by adding Reopen() and Rename()
-type LoggerHook interface {
-	log.Hook
-	Reopen() error
-	GetLogDest() string
-}
-type LogrusHook struct {
-	w io.Writer
-	// file descriptor, can be re-opened
-	fd *os.File
-	// filename to the file descriptor
-	fname string
-	// txtFormatter that doesn't use colors
-	plainTxtFormatter *log.TextFormatter
-
-	mu sync.Mutex
-}
-
-// newLogrusHook creates a new hook. dest can be a file name or one of the following strings:
-// "stderr" - log to stderr, lines will be written to os.Stdout
-// "stdout" - log to stdout, lines will be written to os.Stdout
-// "off" - no log, lines will be written to ioutil.Discard
-func NewLogrusHook(dest string) (LoggerHook, error) {
-	hookMu.Lock()
-	defer hookMu.Unlock()
-	hook := LogrusHook{fname: dest}
-	err := hook.setup(dest)
-	return &hook, err
-}
-
-type OutputOption int
-
-const (
-	OutputStderr OutputOption = 1 + iota
-	OutputStdout
-	OutputOff
-	OutputNull
-	OutputFile
-)
-
-var outputOptions = [...]string{
-	"stderr",
-	"stdout",
-	"off",
-	"",
-	"file",
-}
-
-func (o OutputOption) String() string {
-	return outputOptions[o-1]
-}
-
-func parseOutputOption(str string) OutputOption {
-	switch str {
-	case "stderr":
-		return OutputStderr
-	case "stdout":
-		return OutputStdout
-	case "off":
-		return OutputOff
-	case "":
-		return OutputNull
-	}
-	return OutputFile
-}
-
-// Setup sets the hook's writer w and file descriptor fd
-// assumes the hook.fd is closed and nil
-func (hook *LogrusHook) setup(dest string) error {
-
-	out := parseOutputOption(dest)
-	if out == OutputNull || out == OutputStderr {
-		hook.w = os.Stderr
-	} else if out == OutputStdout {
-		hook.w = os.Stdout
-	} else if out == OutputOff {
-		hook.w = ioutil.Discard
-	} else {
-		if _, err := os.Stat(dest); err == nil {
-			// file exists open the file for appending
-			if err := hook.openAppend(dest); err != nil {
-				return err
-			}
-		} else {
-			// create the file
-			if err := hook.openCreate(dest); err != nil {
-				return err
-			}
-		}
-	}
-	// disable colors when writing to file
-	if hook.fd != nil {
-		hook.plainTxtFormatter = &log.TextFormatter{DisableColors: true}
-	}
-	return nil
-}
-
-// openAppend opens the dest file for appending. Default to os.Stderr if it can't open dest
-func (hook *LogrusHook) openAppend(dest string) (err error) {
-	fd, err := os.OpenFile(dest, os.O_APPEND|os.O_WRONLY, 0644)
-	if err != nil {
-		log.WithError(err).Error("Could not open log file for appending")
-		hook.w = os.Stderr
-		hook.fd = nil
-		return
-	}
-	hook.w = bufio.NewWriter(fd)
-	hook.fd = fd
-	return
-}
-
-// openCreate creates a new dest file for appending. Default to os.Stderr if it can't open dest
-func (hook *LogrusHook) openCreate(dest string) (err error) {
-	fd, err := os.OpenFile(dest, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
-	if err != nil {
-		log.WithError(err).Error("Could not create log file")
-		hook.w = os.Stderr
-		hook.fd = nil
-		return
-	}
-	hook.w = bufio.NewWriter(fd)
-	hook.fd = fd
-	return
-}
-
-// Fire implements the logrus Hook interface. It disables color text formatting if writing to a file
-func (hook *LogrusHook) Fire(entry *log.Entry) error {
-	hookMu.Lock()
-	defer hookMu.Unlock()
-	if hook.fd != nil {
-		// save the old hook
-		oldhook := entry.Logger.Formatter
-		defer func() {
-			// set the back to the old hook after we're done
-			entry.Logger.Formatter = oldhook
-		}()
-		// use the plain text hook
-		entry.Logger.Formatter = hook.plainTxtFormatter
-	}
-	if line, err := entry.String(); err == nil {
-		r := strings.NewReader(line)
-		if _, err = io.Copy(hook.w, r); err != nil {
-			return err
-		}
-		if wb, ok := hook.w.(*bufio.Writer); ok {
-			if err := wb.Flush(); err != nil {
-				return err
-			}
-			if hook.fd != nil {
-				hook.fd.Sync()
-			}
-		}
-		return err
-	} else {
-		return err
-	}
-}
-
-// GetLogDest returns the destination of the log as a string
-func (hook *LogrusHook) GetLogDest() string {
-	hookMu.Lock()
-	defer hookMu.Unlock()
-	return hook.fname
-}
-
-// Levels implements the logrus Hook interface
-func (hook *LogrusHook) Levels() []log.Level {
-	return log.AllLevels
-}
-
-// Reopen closes and re-open log file descriptor, which is a special feature of this hook
-func (hook *LogrusHook) Reopen() error {
-	hookMu.Lock()
-	defer hookMu.Unlock()
-	var err error
-	if hook.fd != nil {
-		if err = hook.fd.Close(); err != nil {
-			return err
-		}
-		// The file could have been re-named by an external program such as logrotate(8)
-		if _, err := os.Stat(hook.fname); err != nil {
-			// The file doesn't exist,create a new one.
-			return hook.openCreate(hook.fname)
-		} else {
-			return hook.openAppend(hook.fname)
-		}
-	}
-	return err
-
-}

+ 354 - 0
mail/envelope.go

@@ -0,0 +1,354 @@
+package mail
+
+import (
+	"bufio"
+	"bytes"
+	"crypto/md5"
+	"encoding/base64"
+	"errors"
+	"fmt"
+	"gopkg.in/iconv.v1"
+	"io"
+	"io/ioutil"
+	"mime/quotedprintable"
+	"net/mail"
+	"net/textproto"
+	"regexp"
+	"strings"
+	"sync"
+	"time"
+)
+
+const maxHeaderChunk = 1 + (3 << 10) // 3KB
+
+// Address encodes an email address of the form `<user@host>`
+type Address struct {
+	User string
+	Host string
+}
+
+func (ep *Address) String() string {
+	return fmt.Sprintf("%s@%s", ep.User, ep.Host)
+}
+
+func (ep *Address) IsEmpty() bool {
+	return ep.User == "" && ep.Host == ""
+}
+
+var ap = mail.AddressParser{}
+
+// NewAddress takes a string of an RFC 5322 address of the
+// form "Gogh Fir <[email protected]>" or "[email protected]".
+func NewAddress(str string) (Address, error) {
+	a, err := ap.Parse(str)
+	if err != nil {
+		return Address{}, err
+	}
+	pos := strings.Index(a.Address, "@")
+	if pos > 0 {
+		return Address{
+				User: a.Address[0:pos],
+				Host: a.Address[pos+1:],
+			},
+			nil
+	}
+	return Address{}, errors.New("invalid address")
+}
+
+// Envelope represents a single SMTP message.
+type Envelope struct {
+	// Remote IP address
+	RemoteIP string
+	// Message sent in EHLO command
+	Helo string
+	// Sender
+	MailFrom Address
+	// Recipients
+	RcptTo []Address
+	// Data stores the header and message body
+	Data bytes.Buffer
+	// Subject stores the subject of the email, extracted and decoded after calling ParseHeaders()
+	Subject string
+	// TLS is true if the email was received using a TLS connection
+	TLS bool
+	// Header stores the results from ParseHeaders()
+	Header textproto.MIMEHeader
+	// Values hold the values generated when processing the envelope by the backend
+	Values map[string]interface{}
+	// Hashes of each email on the rcpt
+	Hashes []string
+	// additional delivery header that may be added
+	DeliveryHeader string
+	// Email(s) will be queued with this id
+	QueuedId string
+	// When locked, it means that the envelope is being processed by the backend
+	sync.Mutex
+}
+
+func NewEnvelope(remoteAddr string, clientID uint64) *Envelope {
+	return &Envelope{
+		RemoteIP: remoteAddr,
+		Values:   make(map[string]interface{}),
+		QueuedId: queuedID(clientID),
+	}
+}
+
+func queuedID(clientID uint64) string {
+	return fmt.Sprintf("%x", md5.Sum([]byte(string(time.Now().Unix())+string(clientID))))
+}
+
+// ParseHeaders parses the headers into Header field of the Envelope struct.
+// Data buffer must be full before calling.
+// It assumes that at most 3KB (maxHeaderChunk) of email data can be the header
+// Decoding to UTF-8 is only done on the Subject, where the result is assigned to the Subject field
+func (e *Envelope) ParseHeaders() error {
+	var err error
+	if e.Header != nil {
+		return errors.New("Headers already parsed")
+	}
+	buf := bytes.NewBuffer(e.Data.Bytes())
+	// find where the header ends, assuming the header is no larger than maxHeaderChunk
+	max := maxHeaderChunk
+	if buf.Len() < max {
+		max = buf.Len()
+	}
+	// read in the chunk which we'll scan for the header
+	chunk := make([]byte, max)
+	buf.Read(chunk)
+	headerEnd := strings.Index(string(chunk), "\n\n") // the first two new-lines chars are the End Of Header
+	if headerEnd > -1 {
+		header := chunk[0:headerEnd]
+		headerReader := textproto.NewReader(bufio.NewReader(bytes.NewBuffer(header)))
+		e.Header, err = headerReader.ReadMIMEHeader()
+		if err != nil {
+			// decode the subject
+			if subject, ok := e.Header["Subject"]; ok {
+				e.Subject = MimeHeaderDecode(subject[0])
+			}
+		}
+	} else {
+		err = errors.New("header not found")
+	}
+	return err
+}
+
+// Len returns the number of bytes that would be in the reader returned by NewReader()
+func (e *Envelope) Len() int {
+	return len(e.DeliveryHeader) + e.Data.Len()
+}
+
+// Returns a new reader for reading the email contents, including the delivery headers
+func (e *Envelope) NewReader() io.Reader {
+	return io.MultiReader(
+		strings.NewReader(e.DeliveryHeader),
+		bytes.NewReader(e.Data.Bytes()),
+	)
+}
+
+// String converts the email to string.
+// Typically, you would want to use the compressor guerrilla.Processor for more efficiency, or use NewReader
+func (e *Envelope) String() string {
+	return e.DeliveryHeader + e.Data.String()
+}
+
+// ResetTransaction is called when the transaction is reset (keeping the connection open)
+func (e *Envelope) ResetTransaction() {
+	e.MailFrom = Address{}
+	e.RcptTo = []Address{}
+	// reset the data buffer, keep it allocated
+	e.Data.Reset()
+
+	// todo: these are probably good candidates for buffers / use sync.Pool (after profiling)
+	e.Subject = ""
+	e.Header = nil
+	e.Hashes = make([]string, 0)
+	e.DeliveryHeader = ""
+	e.Values = make(map[string]interface{})
+}
+
+// Reseed is called when the envelope is reused for a new connection, once it's accepted
+func (e *Envelope) Reseed(RemoteIP string, clientID uint64) {
+	e.RemoteIP = RemoteIP
+	e.QueuedId = queuedID(clientID)
+	e.Helo = ""
+	e.TLS = false
+}
+
+// PushRcpt adds a recipient email address to the envelope
+func (e *Envelope) PushRcpt(addr Address) {
+	e.RcptTo = append(e.RcptTo, addr)
+}
+
+// PopRcpt removes the last email address that was pushed to the envelope
+func (e *Envelope) PopRcpt() Address {
+	ret := e.RcptTo[len(e.RcptTo)-1]
+	e.RcptTo = e.RcptTo[:len(e.RcptTo)-1]
+	return ret
+}
+
+var mimeRegex, _ = regexp.Compile(`=\?(.+?)\?([QBqp])\?(.+?)\?=`)
+
+// Decode strings in Mime header format
+// eg. =?ISO-2022-JP?B?GyRCIVo9dztSOWJAOCVBJWMbKEI=?=
+// This function uses GNU iconv under the hood, for more charset support than in Go's library
+func MimeHeaderDecode(str string) string {
+
+	matched := mimeRegex.FindAllStringSubmatch(str, -1)
+	var charset, encoding, payload string
+	if matched != nil {
+		for i := 0; i < len(matched); i++ {
+			if len(matched[i]) > 2 {
+				charset = matched[i][1]
+				encoding = strings.ToUpper(matched[i][2])
+				payload = matched[i][3]
+				switch encoding {
+				case "B":
+					str = strings.Replace(
+						str,
+						matched[i][0],
+						MailTransportDecode(payload, "base64", charset),
+						1)
+				case "Q":
+					str = strings.Replace(
+						str,
+						matched[i][0],
+						MailTransportDecode(payload, "quoted-printable", charset),
+						1)
+				}
+			}
+		}
+	}
+	return str
+}
+
+// decode from 7bit to 8bit UTF-8
+// encodingType can be "base64" or "quoted-printable"
+func MailTransportDecode(str string, encodingType string, charset string) string {
+	if charset == "" {
+		charset = "UTF-8"
+	} else {
+		charset = strings.ToUpper(charset)
+	}
+	if encodingType == "base64" {
+		str = fromBase64(str)
+	} else if encodingType == "quoted-printable" {
+		str = fromQuotedP(str)
+	}
+
+	if charset != "UTF-8" {
+		charset = fixCharset(charset)
+		// iconv is pretty good at what it does
+		if cd, err := iconv.Open("UTF-8", charset); err == nil {
+			defer func() {
+				cd.Close()
+				if r := recover(); r != nil {
+					//logln(1, fmt.Sprintf("Recovered in %v", r))
+				}
+			}()
+			// eg. charset can be "ISO-2022-JP"
+			return cd.ConvString(str)
+		}
+
+	}
+	return str
+}
+
+func fromBase64(data string) string {
+	buf := bytes.NewBufferString(data)
+	decoder := base64.NewDecoder(base64.StdEncoding, buf)
+	res, _ := ioutil.ReadAll(decoder)
+	return string(res)
+}
+
+func fromQuotedP(data string) string {
+	res, _ := ioutil.ReadAll(quotedprintable.NewReader(strings.NewReader(data)))
+	return string(res)
+}
+
+var charsetRegex, _ = regexp.Compile(`[_:.\/\\]`)
+
+func fixCharset(charset string) string {
+	fixed_charset := charsetRegex.ReplaceAllString(charset, "-")
+	// Fix charset
+	// borrowed from http://squirrelmail.svn.sourceforge.net/viewvc/squirrelmail/trunk/squirrelmail/include/languages.php?revision=13765&view=markup
+	// OE ks_c_5601_1987 > cp949
+	fixed_charset = strings.Replace(fixed_charset, "ks-c-5601-1987", "cp949", -1)
+	// Moz x-euc-tw > euc-tw
+	fixed_charset = strings.Replace(fixed_charset, "x-euc", "euc", -1)
+	// Moz x-windows-949 > cp949
+	fixed_charset = strings.Replace(fixed_charset, "x-windows_", "cp", -1)
+	// windows-125x and cp125x charsets
+	fixed_charset = strings.Replace(fixed_charset, "windows-", "cp", -1)
+	// ibm > cp
+	fixed_charset = strings.Replace(fixed_charset, "ibm", "cp", -1)
+	// iso-8859-8-i -> iso-8859-8
+	fixed_charset = strings.Replace(fixed_charset, "iso-8859-8-i", "iso-8859-8", -1)
+	if charset != fixed_charset {
+		return fixed_charset
+	}
+	return charset
+}
+
+// Envelopes have their own pool
+
+type Pool struct {
+	// envelopes that are ready to be borrowed
+	pool chan *Envelope
+	// semaphore to control number of maximum borrowed envelopes
+	sem chan bool
+}
+
+func NewPool(poolSize int) *Pool {
+	return &Pool{
+		pool: make(chan *Envelope, poolSize),
+		sem:  make(chan bool, poolSize),
+	}
+}
+
+func (p *Pool) Borrow(remoteAddr string, clientID uint64) *Envelope {
+	var e *Envelope
+	p.sem <- true // block the envelope until more room
+	select {
+	case e = <-p.pool:
+		e.Reseed(remoteAddr, clientID)
+	default:
+		e = NewEnvelope(remoteAddr, clientID)
+	}
+	return e
+}
+
+// Return returns an envelope back to the envelope pool
+// Note that an envelope will not be recycled while it still is
+// processing
+func (p *Pool) Return(e *Envelope) {
+	// we don't want to recycle an envelope that may still be processing
+	isUnlocked := func() <-chan bool {
+		signal := make(chan bool)
+		// make sure envelope finished processing
+		go func() {
+			// lock will block if still processing
+			e.Lock()
+			// got the lock, it means processing finished
+			e.Unlock()
+			// generate a signal
+			signal <- true
+		}()
+		return signal
+	}()
+
+	select {
+	case <-time.After(time.Second * 30):
+		// envelope still processing, we can't recycle it.
+	case <-isUnlocked:
+		// The envelope was _unlocked_, it finished processing
+		// put back in the pool or destroy
+		select {
+		case p.pool <- e:
+			//placed envelope back in pool
+		default:
+			// pool is full, don't return
+		}
+	}
+	// take a value off the semaphore to make room for more envelopes
+	<-p.sem
+}

+ 61 - 0
mail/envelope_test.go

@@ -0,0 +1,61 @@
+package mail
+
+import (
+	"io/ioutil"
+	"strings"
+	"testing"
+)
+
+func TestMimeHeaderDecode(t *testing.T) {
+	str := MimeHeaderDecode("=?ISO-2022-JP?B?GyRCIVo9dztSOWJAOCVBJWMbKEI=?=")
+	if i := strings.Index(str, "【女子高生チャ"); i != 0 {
+		t.Error("expecting 【女子高生チャ, got:", str)
+	}
+	str = MimeHeaderDecode("=?ISO-8859-1?Q?Andr=E9?= Pirard <[email protected]>")
+	if strings.Index(str, "André Pirard") != 0 {
+		t.Error("expecting André Pirard, got:", str)
+	}
+}
+func TestNewAddress(t *testing.T) {
+
+	addr, err := NewAddress("<hoop>")
+	if err == nil {
+		t.Error("there should be an error:", addr)
+	}
+
+	addr, err = NewAddress(`Gogh Fir <[email protected]>`)
+	if err != nil {
+		t.Error("there should be no error:", addr.Host, err)
+	}
+}
+func TestEnvelope(t *testing.T) {
+	e := NewEnvelope("127.0.0.1", 22)
+
+	e.QueuedId = "abc123"
+	e.Helo = "helo.example.com"
+	e.MailFrom = Address{User: "test", Host: "example.com"}
+	e.TLS = true
+	e.RemoteIP = "222.111.233.121"
+	to := Address{User: "test", Host: "example.com"}
+	e.PushRcpt(to)
+	if to.String() != "[email protected]" {
+		t.Error("to does not equal [email protected], it was:", to.String())
+	}
+	e.Data.WriteString("Subject: Test\n\nThis is a test nbnb nbnb hgghgh nnnbnb nbnbnb nbnbn.")
+
+	addHead := "Delivered-To: " + to.String() + "\n"
+	addHead += "Received: from " + e.Helo + " (" + e.Helo + "  [" + e.RemoteIP + "])\n"
+	e.DeliveryHeader = addHead
+
+	r := e.NewReader()
+
+	data, _ := ioutil.ReadAll(r)
+	if len(data) != e.Len() {
+		t.Error("e.Len() is incorrect, it showed ", e.Len(), " but we wanted ", len(data))
+	}
+	e.ParseHeaders()
+	if e.Subject != "Test" {
+		t.Error("Subject expecting: Test, got:", e.Subject)
+	}
+
+}

+ 7 - 5
pool.go

@@ -3,6 +3,7 @@ package guerrilla
 import (
 import (
 	"errors"
 	"github.com/flashmob/go-guerrilla/log"
+	"github.com/flashmob/go-guerrilla/mail"
 	"net"
 	"sync"
 	"sync/atomic"
@@ -18,7 +19,7 @@ type Poolable interface {
 	// ability to set read/write timeout
 	setTimeout(t time.Duration)
 	// set a new connection and client id
-	init(c net.Conn, clientID uint64)
+	init(c net.Conn, clientID uint64, ep *mail.Pool)
 	// get a unique id
 	getID() uint64
 }
@@ -121,7 +122,7 @@ func (p *Pool) GetActiveClientsCount() int {
 }
 
 // Borrow a Client from the pool. Will block if len(activeClients) > maxClients
-func (p *Pool) Borrow(conn net.Conn, clientID uint64, logger log.Logger) (Poolable, error) {
+func (p *Pool) Borrow(conn net.Conn, clientID uint64, logger log.Logger, ep *mail.Pool) (Poolable, error) {
 	p.poolGuard.Lock()
 	defer p.poolGuard.Unlock()
 
@@ -134,9 +135,9 @@ func (p *Pool) Borrow(conn net.Conn, clientID uint64, logger log.Logger) (Poolab
 	case p.sem <- true: // block the client from serving until there is room
 		select {
 		case c = <-p.pool:
-			c.init(conn, clientID)
+			c.init(conn, clientID, ep)
 		default:
-			c = NewClient(conn, clientID, logger)
+			c = NewClient(conn, clientID, logger, ep)
 		}
 		p.activeClientsAdd(c)
 
@@ -149,12 +150,13 @@ func (p *Pool) Borrow(conn net.Conn, clientID uint64, logger log.Logger) (Poolab
 
 // Return returns a Client back to the pool.
 func (p *Pool) Return(c Poolable) {
+	p.activeClientsRemove(c)
 	select {
 	case p.pool <- c:
 	default:
 		// hasta la vista, baby...
 	}
-	p.activeClientsRemove(c)
+
 	<-p.sem // make room for the next serving client
 }
 

+ 8 - 1
response/enhanced.go

@@ -134,6 +134,7 @@ type Responses struct {
 	FailBackendNotRunning        string
 	FailBackendTransaction       string
 	FailBackendTimeout           string
+	FailRcptCmd                  string
 
 	// The 400's
 	ErrorTooManyRecipients string
@@ -155,7 +156,6 @@ type Responses struct {
 // Called automatically during package load to build up the Responses struct
 func init() {
 
-	// There's even a Wikipedia page for canned responses: https://en.wikipedia.org/wiki/Canned_response
 	Canned = Responses{}
 
 	Canned.FailLineTooLong = (&Response{
@@ -337,6 +337,13 @@ func init() {
 		Comment:      "Error: transaction timeout",
 	}).String()
 
+	Canned.FailRcptCmd = (&Response{
+		EnhancedCode: BadDestinationMailboxAddress,
+		BasicCode:    550,
+		Class:        ClassPermanentFailure,
+		Comment:      "User unknown in local recipient table",
+	}).String()
+
 }
 
 // DefaultMap contains defined default codes (RfC 3463)

+ 1 - 1
response/quote.go

@@ -33,7 +33,7 @@ var quotes = struct {
 		"214-The Dude: No, you're not wrong Walter, you're just an ass-hole." +
 		"214 Walter Sobchak: Okay then.",
 	14: "214-Private Snoop: you see what happens lebowski?" + CRLF +
-		"214-The Dude: nobody calls me lebowski, you got the wrong guy, I'm the the dude, man." + CRLF +
+		"214-The Dude: nobody calls me lebowski, you got the wrong guy, I'm the dude, man." + CRLF +
 		"214-Private Snoop: Your name's Lebowski, Lebowski. Your wife is Bunny." + CRLF +
 		"214-The Dude: My wife? Bunny? Do you see a wedding ring on my finger? " + CRLF +
 		"214 Does this place look like I'm f**kin married? The toilet seat's up man!",

+ 91 - 67
server.go

@@ -6,15 +6,14 @@ import (
 	"fmt"
 	"io"
 	"net"
-	"runtime"
 	"strings"
 	"sync"
 	"sync/atomic"
 	"time"
 
 	"github.com/flashmob/go-guerrilla/backends"
-	"github.com/flashmob/go-guerrilla/envelope"
 	"github.com/flashmob/go-guerrilla/log"
+	"github.com/flashmob/go-guerrilla/mail"
 	"github.com/flashmob/go-guerrilla/response"
 )
 
@@ -47,7 +46,6 @@ const (
 // Server listens for SMTP clients on the port specified in its config
 type server struct {
 	configStore     atomic.Value // stores guerrilla.ServerConfig
-	backend         backends.Backend
 	tlsConfigStore  atomic.Value
 	timeout         atomic.Value // stores time.Duration
 	listenInterface string
@@ -57,11 +55,11 @@ type server struct {
 	closedListener  chan (bool)
 	hosts           allowedHosts // stores map[string]bool for faster lookup
 	state           int
-	mainlog         log.Logger
-	log             log.Logger
 	// If log changed after a config reload, newLogStore stores the value here until it's safe to change it
 	logStore     atomic.Value
 	mainlogStore atomic.Value
+	backendStore atomic.Value
+	envelopePool *mail.Pool
 }
 
 type allowedHosts struct {
@@ -72,27 +70,26 @@ type allowedHosts struct {
 // Creates and returns a new ready-to-run Server from a configuration
 func newServer(sc *ServerConfig, b backends.Backend, l log.Logger) (*server, error) {
 	server := &server{
-		backend:         b,
 		clientPool:      NewPool(sc.MaxClients),
 		closedListener:  make(chan (bool), 1),
 		listenInterface: sc.ListenInterface,
 		state:           ServerStateNew,
-		mainlog:         l,
+		envelopePool:    mail.NewPool(sc.MaxClients),
 	}
-	var logOpenError error
-	if sc.LogFile == "" {
+	server.logStore.Store(l)
+	server.backendStore.Store(b)
+	logFile := sc.LogFile
+	if logFile == "" {
 		// none set, use the same log file as mainlog
-		server.log, logOpenError = log.GetLogger(server.mainlog.GetLogDest())
-	} else {
-		server.log, logOpenError = log.GetLogger(sc.LogFile)
+		logFile = server.mainlog().GetLogDest()
 	}
+	// set level to same level as mainlog level
+	mainlog, logOpenError := log.GetLogger(logFile, server.mainlog().GetLevel())
+	server.mainlogStore.Store(mainlog)
 	if logOpenError != nil {
-		server.log.WithError(logOpenError).Errorf("Failed creating a logger for server [%s]", sc.ListenInterface)
+		server.log().WithError(logOpenError).Errorf("Failed creating a logger for server [%s]", sc.ListenInterface)
 	}
 
-	// set to same level
-	server.log.SetLevel(server.mainlog.GetLevel())
-
 	server.setConfig(sc)
 	server.setTimeout(sc.Timeout)
 	if err := server.configureSSL(); err != nil {
@@ -119,21 +116,17 @@ func (s *server) configureSSL() error {
 	return nil
 }
 
-// configureLog checks to see if there is a new logger, so that the server.log can be safely changed
-// this function is not gorotine safe, although it'll read the new value safely
-func (s *server) configureLog() {
-	// when log changed
-	if l, ok := s.logStore.Load().(log.Logger); ok {
-		if l != s.log {
-			s.log = l
-		}
-	}
-	// when mainlog changed
-	if ml, ok := s.mainlogStore.Load().(log.Logger); ok {
-		if ml != s.mainlog {
-			s.mainlog = ml
-		}
+// setBackend sets the backend to use for processing email envelopes
+func (s *server) setBackend(b backends.Backend) {
+	s.backendStore.Store(b)
+}
+
+// backend gets the backend used to process email envelopes
+func (s *server) backend() backends.Backend {
+	if b, ok := s.backendStore.Load().(backends.Backend); ok {
+		return b
 	}
+	return nil
 }
 
 // Set the timeout for the server and all clients
@@ -177,43 +170,43 @@ func (server *server) Start(startWG *sync.WaitGroup) error {
 		return fmt.Errorf("[%s] Cannot listen on port: %s ", server.listenInterface, err.Error())
 	}
 
-	server.log.Infof("Listening on TCP %s", server.listenInterface)
+	server.log().Infof("Listening on TCP %s", server.listenInterface)
 	server.state = ServerStateRunning
 	startWG.Done() // start successful, don't wait for me
 
 	for {
-		server.log.Debugf("[%s] Waiting for a new client. Next Client ID: %d", server.listenInterface, clientID+1)
+		server.log().Debugf("[%s] Waiting for a new client. Next Client ID: %d", server.listenInterface, clientID+1)
 		conn, err := listener.Accept()
-		server.configureLog()
 		clientID++
 		if err != nil {
 			if e, ok := err.(net.Error); ok && !e.Temporary() {
-				server.log.Infof("Server [%s] has stopped accepting new clients", server.listenInterface)
+				server.log().Infof("Server [%s] has stopped accepting new clients", server.listenInterface)
 				// the listener has been closed, wait for clients to exit
-				server.log.Infof("shutting down pool [%s]", server.listenInterface)
+				server.log().Infof("shutting down pool [%s]", server.listenInterface)
 				server.clientPool.ShutdownState()
 				server.clientPool.ShutdownWait()
 				server.state = ServerStateStopped
 				server.closedListener <- true
 				return nil
 			}
-			server.mainlog.WithError(err).Info("Temporary error accepting client")
+			server.mainlog().WithError(err).Info("Temporary error accepting client")
 			continue
 		}
 		go func(p Poolable, borrow_err error) {
 			c := p.(*client)
 			if borrow_err == nil {
 				server.handleClient(c)
+				server.envelopePool.Return(c.Envelope)
 				server.clientPool.Return(c)
 			} else {
-				server.log.WithError(borrow_err).Info("couldn't borrow a new client")
+				server.log().WithError(borrow_err).Info("couldn't borrow a new client")
 				// we could not get a client, so close the connection.
 				// we could not get a client, so close the connection.
 				conn.Close()
 				conn.Close()
 
 
 			}
 			}
 			// intentionally placed Borrow in args so that it's called in the
 			// intentionally placed Borrow in args so that it's called in the
 			// same main goroutine.
 			// same main goroutine.
-		}(server.clientPool.Borrow(conn, clientID, server.log))
+		}(server.clientPool.Borrow(conn, clientID, server.log(), server.envelopePool))
 
 
 	}
 	}
 }
 }
@@ -284,12 +277,12 @@ func (server *server) isShuttingDown() bool {
 func (server *server) handleClient(client *client) {
 	defer client.closeConn()
 	sc := server.configStore.Load().(ServerConfig)
-	server.log.Infof("Handle client [%s], id: %d", client.RemoteAddress, client.ID)
+	server.log().Infof("Handle client [%s], id: %d", client.RemoteIP, client.ID)
 
 	// Initial greeting
-	greeting := fmt.Sprintf("220 %s SMTP Guerrilla(%s) #%d (%d) %s gr:%d",
+	greeting := fmt.Sprintf("220 %s SMTP Guerrilla(%s) #%d (%d) %s",
 		sc.Hostname, Version, client.ID,
-		server.clientPool.GetActiveClientsCount(), time.Now().Format(time.RFC3339), runtime.NumGoroutine())
+		server.clientPool.GetActiveClientsCount(), time.Now().Format(time.RFC3339))
 
 	helo := fmt.Sprintf("250 %s Hello", sc.Hostname)
 	// ehlo is a multi-line reply and need additional \r\n at the end
@@ -307,11 +300,11 @@ func (server *server) handleClient(client *client) {
 	if sc.TLSAlwaysOn {
 		tlsConfig, ok := server.tlsConfigStore.Load().(*tls.Config)
 		if !ok {
-			server.mainlog.Error("Failed to load *tls.Config")
+			server.mainlog().Error("Failed to load *tls.Config")
 		} else if err := client.upgradeToTLS(tlsConfig); err == nil {
 			advertiseTLS = ""
 		} else {
-			server.log.WithError(err).Warnf("[%s] Failed TLS handshake", client.RemoteAddress)
+			server.log().WithError(err).Warnf("[%s] Failed TLS handshake", client.RemoteIP)
 			// server requires TLS, but can't handshake
 			client.kill()
 		}
@@ -329,19 +322,19 @@ func (server *server) handleClient(client *client) {
 		case ClientCmd:
 			client.bufin.setLimit(CommandLineMaxLength)
 			input, err := server.readCommand(client, sc.MaxSize)
-			server.log.Debugf("Client sent: %s", input)
+			server.log().Debugf("Client sent: %s", input)
 			if err == io.EOF {
-				server.log.WithError(err).Warnf("Client closed the connection: %s", client.RemoteAddress)
+				server.log().WithError(err).Warnf("Client closed the connection: %s", client.RemoteIP)
 				return
 			} else if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
-				server.log.WithError(err).Warnf("Timeout: %s", client.RemoteAddress)
+				server.log().WithError(err).Warnf("Timeout: %s", client.RemoteIP)
 				return
 			} else if err == LineLimitExceeded {
 				client.sendResponse(response.Canned.FailLineTooLong)
 				client.kill()
 				break
 			} else if err != nil {
-				server.log.WithError(err).Warnf("Read error: %s", client.RemoteAddress)
+				server.log().WithError(err).Warnf("Read error: %s", client.RemoteIP)
 				client.kill()
 				break
 			}
@@ -381,21 +374,22 @@ func (server *server) handleClient(client *client) {
 					client.sendResponse(response.Canned.FailNestedMailCmd)
 					break
 				}
-				mail := input[10:]
-				from := envelope.EmailAddress{}
-
-				if !(strings.Index(mail, "<>") == 0) &&
-					!(strings.Index(mail, " <>") == 0) {
+				addr := input[10:]
+				if !(strings.Index(addr, "<>") == 0) &&
+					!(strings.Index(addr, " <>") == 0) {
 					// Not Bounce, extract mail.
-					from, err = extractEmail(mail)
-				}
+					if from, err := extractEmail(addr); err != nil {
+						client.sendResponse(err)
+						break
+					} else {
+						client.MailFrom = from
+					}
 
-				if err != nil {
-					client.sendResponse(err)
 				} else {
-					client.MailFrom = from
-					client.sendResponse(response.Canned.SuccessMailCmd)
+					// bounce has empty from address
+					client.MailFrom = mail.Address{}
 				}
+				client.sendResponse(response.Canned.SuccessMailCmd)
 
 			case strings.Index(cmd, "RCPT TO:") == 0:
 				if len(client.RcptTo) > RFC2821LimitRecipients {
@@ -409,8 +403,15 @@ func (server *server) handleClient(client *client) {
 					if !server.allowsHost(to.Host) {
 						client.sendResponse(response.Canned.ErrorRelayDenied, to.Host)
 					} else {
-						client.RcptTo = append(client.RcptTo, to)
-						client.sendResponse(response.Canned.SuccessRcptCmd)
+						client.PushRcpt(to)
+						rcptError := server.backend().ValidateRcpt(client.Envelope)
+						if rcptError != nil {
+							client.PopRcpt()
+							client.sendResponse(response.Canned.FailRcptCmd + " " + rcptError.Error())
+						} else {
+							client.sendResponse(response.Canned.SuccessRcptCmd)
+						}
+
 					}
 				}
 
@@ -475,11 +476,12 @@ func (server *server) handleClient(client *client) {
 					client.sendResponse(response.Canned.FailReadErrorDataCmd, err.Error())
 					client.kill()
 				}
-				server.log.WithError(err).Warn("Error reading data")
+				server.log().WithError(err).Warn("Error reading data")
+				client.resetTransaction()
 				break
 			}
 
-			res := server.backend.Process(client.Envelope)
+			res := server.backend().Process(client.Envelope)
 			if res.Code() < 300 {
 				client.messagesSent++
 			}
@@ -494,12 +496,12 @@ func (server *server) handleClient(client *client) {
 			if !client.TLS && sc.StartTLSOn {
 				tlsConfig, ok := server.tlsConfigStore.Load().(*tls.Config)
 				if !ok {
-					server.mainlog.Error("Failed to load *tls.Config")
+					server.mainlog().Error("Failed to load *tls.Config")
 				} else if err := client.upgradeToTLS(tlsConfig); err == nil {
 					advertiseTLS = ""
 					client.resetTransaction()
 				} else {
-					server.log.WithError(err).Warnf("[%s] Failed TLS handshake", client.RemoteAddress)
+					server.log().WithError(err).Warnf("[%s] Failed TLS handshake", client.RemoteIP)
 					// Don't disconnect, let the client decide if it wants to continue
 				}
 			}
@@ -512,15 +514,37 @@ func (server *server) handleClient(client *client) {
 		}
 
 		if client.bufout.Buffered() > 0 {
-			if server.log.IsDebug() {
-				server.log.Debugf("Writing response to client: \n%s", client.response.String())
+			if server.log().IsDebug() {
+				server.log().Debugf("Writing response to client: \n%s", client.response.String())
 			}
 			err := server.flushResponse(client)
 			if err != nil {
-				server.log.WithError(err).Debug("Error writing response")
+				server.log().WithError(err).Debug("Error writing response")
 				return
 			}
 		}
 
 	}
 }
+
+func (s *server) log() log.Logger {
+	if l, ok := s.logStore.Load().(log.Logger); ok {
+		return l
+	}
+	l, err := log.GetLogger(log.OutputStderr.String(), log.InfoLevel.String())
+	if err == nil {
+		s.logStore.Store(l)
+	}
+	return l
+}
+
+func (s *server) mainlog() log.Logger {
+	if l, ok := s.mainlogStore.Load().(log.Logger); ok {
+		return l
+	}
+	l, err := log.GetLogger(log.OutputStderr.String(), log.InfoLevel.String())
+	if err == nil {
+		s.mainlogStore.Store(l)
+	}
+	return l
+}
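The `log()`, `mainlog()` and `backend()` accessors above replace the old `configureLog()` polling: each one reads an `atomic.Value`, so a config reload can swap the logger or backend at runtime without a mutex, and readers lazily fall back to a default. A minimal self-contained sketch of the same pattern (the `Logger` interface and `stdoutLogger` here are illustrative stand-ins, not go-guerrilla's types):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Logger is a stand-in for go-guerrilla's log.Logger interface.
type Logger interface {
	Info(msg string)
}

type stdoutLogger struct{ prefix string }

func (l *stdoutLogger) Info(msg string) { fmt.Println(l.prefix, msg) }

// server holds the current logger in an atomic.Value so readers
// never take a lock and config reloads can swap it live.
type server struct {
	logStore atomic.Value
}

// setLog atomically publishes a new logger (e.g. on SIGHUP reload).
func (s *server) setLog(l *stdoutLogger) { s.logStore.Store(l) }

// log returns the current logger, installing a default on first use,
// mirroring the lazy-default behaviour of server.log() above.
func (s *server) log() Logger {
	if l, ok := s.logStore.Load().(*stdoutLogger); ok {
		return l
	}
	l := &stdoutLogger{prefix: "[default]"}
	s.logStore.Store(l)
	return l
}

func main() {
	s := &server{}
	s.log().Info("hello") // uses the lazy default
	s.setLog(&stdoutLogger{prefix: "[reloaded]"})
	s.log().Info("hello again") // sees the swapped logger
}
```

Note that `atomic.Value` requires every `Store` to use the same concrete type, which is why the sketch stores `*stdoutLogger` consistently rather than the bare interface.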

+ 7 - 11
server_test.go

@@ -10,6 +10,7 @@ import (
 
 
 	"github.com/flashmob/go-guerrilla/backends"
 	"github.com/flashmob/go-guerrilla/backends"
 	"github.com/flashmob/go-guerrilla/log"
 	"github.com/flashmob/go-guerrilla/log"
+	"github.com/flashmob/go-guerrilla/mail"
 	"github.com/flashmob/go-guerrilla/mocks"
 	"github.com/flashmob/go-guerrilla/mocks"
 )
 )
 
 
@@ -37,11 +38,13 @@ func getMockServerConfig() *ServerConfig {
 func getMockServerConn(sc *ServerConfig, t *testing.T) (*mocks.Conn, *server) {
 	var logOpenError error
 	var mainlog log.Logger
-	mainlog, logOpenError = log.GetLogger(sc.LogFile)
+	mainlog, logOpenError = log.GetLogger(sc.LogFile, "debug")
 	if logOpenError != nil {
 		mainlog.WithError(logOpenError).Errorf("Failed creating a logger for mock conn [%s]", sc.ListenInterface)
 	}
-	backend, err := backends.New("dummy", backends.BackendConfig{"log_received_mails": true}, mainlog)
+	backend, err := backends.New(
+		backends.BackendConfig{"log_received_mails": true, "save_workers_size": 1},
+		mainlog)
 	if err != nil {
 		t.Error("new dummy backend failed because:", err)
 	}
@@ -59,13 +62,13 @@ func TestHandleClient(t *testing.T) {
 	var mainlog log.Logger
 	var logOpenError error
 	sc := getMockServerConfig()
-	mainlog, logOpenError = log.GetLogger(sc.LogFile)
+	mainlog, logOpenError = log.GetLogger(sc.LogFile, "debug")
 	if logOpenError != nil {
 		mainlog.WithError(logOpenError).Errorf("Failed creating a logger for mock conn [%s]", sc.ListenInterface)
 	}
 	conn, server := getMockServerConn(sc, t)
 	// call the serve.handleClient() func in a goroutine.
-	client := NewClient(conn.Server, 1, mainlog)
+	client := NewClient(conn.Server, 1, mainlog, mail.NewPool(5))
 	var wg sync.WaitGroup
 	wg.Add(1)
 	go func() {
@@ -92,10 +95,3 @@ func TestHandleClient(t *testing.T) {
 
 
 // TODO
 // - test github issue #44 and #42
-// - test other commands
-
-// also, could test
-// - test allowsHost() and allowsHost()
-// - test isInTransaction() (make sure it returns true after MAIL command, but false after HELO/EHLO/RSET/end of DATA
-// - test to make sure client envelope
-// - perhaps anything else that can be tested in server_test.go
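The test setup above hands `NewClient` a `mail.NewPool(5)`, and `Start` returns each `c.Envelope` to an `envelopePool` once the client is handled, so envelope allocations are recycled between connections. go-guerrilla uses its own bounded `mail.Pool`; the sketch below illustrates the same recycle-and-reset idea with the stdlib `sync.Pool` (the `Envelope` fields here are a simplified stand-in):

```go
package main

import (
	"fmt"
	"sync"
)

// Envelope is a stand-in for go-guerrilla's mail.Envelope; real
// envelopes also carry the DATA buffer, HELO name, TLS flag, etc.
type Envelope struct {
	RemoteIP string
	RcptTo   []string
}

// reset clears per-connection state so a recycled envelope cannot
// leak data from a previous client into the next one.
func (e *Envelope) reset() {
	e.RemoteIP = ""
	e.RcptTo = e.RcptTo[:0]
}

// envelopePool recycles envelopes between connections, the same idea
// as the envelopePool passed to clientPool.Borrow in the diff above.
var envelopePool = sync.Pool{
	New: func() interface{} { return &Envelope{} },
}

func handleConnection(ip string) {
	e := envelopePool.Get().(*Envelope)
	defer func() {
		e.reset()
		envelopePool.Put(e) // analogous to envelopePool.Return(c.Envelope)
	}()
	e.RemoteIP = ip
	e.RcptTo = append(e.RcptTo, "test@grr.la")
	fmt.Println("processed envelope for", e.RemoteIP)
}

func main() {
	handleConnection("127.0.0.1")
	handleConnection("10.0.0.1")
}
```

Unlike `sync.Pool`, a bounded pool such as `mail.NewPool(5)` also caps how many envelopes exist at once, which doubles as a crude concurrency limit.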

+ 13 - 8
tests/guerrilla_test.go

@@ -62,8 +62,8 @@ func init() {
 		initErr = errors.New("Could not Unmarshal config," + err.Error())
 	} else {
 		setupCerts(config)
-		logger, _ = log.GetLogger(config.LogFile)
-		backend, _ := getBackend("dummy", config.BackendConfig, logger)
+		logger, _ = log.GetLogger(config.LogFile, "debug")
+		backend, _ := getBackend(config.BackendConfig, logger)
 		app, _ = guerrilla.New(&config.AppConfig, backend, logger)
 	}
 
@@ -74,9 +74,8 @@ var configJson = `
 {
     "log_file" : "./testlog",
     "log_level" : "debug",
-    "pid_file" : "/var/run/go-guerrilla.pid",
+    "pid_file" : "go-guerrilla.pid",
     "allowed_hosts": ["spam4.me","grr.la"],
-    "backend_name" : "dummy",
     "backend_config" :
         {
             "log_received_mails" : true
@@ -113,8 +112,13 @@ var configJson = `
 }
 `
 
-func getBackend(backendName string, backendConfig map[string]interface{}, l log.Logger) (backends.Backend, error) {
-	return backends.New(backendName, backendConfig, l)
+func getBackend(backendConfig map[string]interface{}, l log.Logger) (backends.Backend, error) {
+	b, err := backends.New(backendConfig, l)
+	if err != nil {
+		fmt.Println("backend init error", err)
+		os.Exit(1)
+	}
+	return b, err
 }
 
 func setupCerts(c *TestConfig) {
@@ -188,7 +192,6 @@ func TestGreeting(t *testing.T) {
 		t.FailNow()
 	}
 	if startErrors := app.Start(); startErrors == nil {
-
 		// 1. plaintext connection
 		conn, err := net.Dial("tcp", config.Servers[0].ListenInterface)
 		if err != nil {
@@ -236,6 +239,7 @@ func TestGreeting(t *testing.T) {
 		conn.Close()
 
 	} else {
+		fmt.Println("Nope", startErrors)
 		if startErrors := app.Start(); startErrors != nil {
 			t.Error(startErrors)
 			t.FailNow()
@@ -332,6 +336,7 @@ func TestRFC2821LimitRecipients(t *testing.T) {
 			}
 
 			for i := 0; i < 101; i++ {
+				//fmt.Println(fmt.Sprintf("RCPT TO:test%d@grr.la", i))
 				if _, err := Command(conn, bufin, fmt.Sprintf("RCPT TO:test%d@grr.la", i)); err != nil {
 					t.Error("RCPT TO", err.Error())
 					break
@@ -1096,7 +1101,7 @@ func TestDataCommand(t *testing.T) {
 				bufin,
 				email+"\r\n.\r\n")
 			//expected := "500 Line too long"
-			expected := "250 2.0.0 OK : queued as s0m3l337Ha5hva1u3LOL"
+			expected := "250 2.0.0 OK : queued as "
 			if strings.Index(response, expected) != 0 {
 				t.Error("Server did not respond with", expected, ", it said:"+response, err)
 			}

+ 4 - 4
util.go

@@ -5,14 +5,14 @@ import (
 	"regexp"
 	"regexp"
 	"strings"
 	"strings"
 
 
-	"github.com/flashmob/go-guerrilla/envelope"
+	"github.com/flashmob/go-guerrilla/mail"
 	"github.com/flashmob/go-guerrilla/response"
 	"github.com/flashmob/go-guerrilla/response"
 )
 )
 
 
 var extractEmailRegex, _ = regexp.Compile(`<(.+?)@(.+?)>`) // go home regex, you're drunk!
 var extractEmailRegex, _ = regexp.Compile(`<(.+?)@(.+?)>`) // go home regex, you're drunk!
 
 
-func extractEmail(str string) (envelope.EmailAddress, error) {
-	email := envelope.EmailAddress{}
+func extractEmail(str string) (mail.Address, error) {
+	email := mail.Address{}
 	var err error
 	if len(str) > RFC2821LimitPath {
 		return email, errors.New(response.Canned.FailPathTooLong)
@@ -21,7 +21,7 @@ func extractEmail(str string) (envelope.EmailAddress, error) {
 		email.User = matched[1]
 		email.Host = validHost(matched[2])
 	} else if res := strings.Split(str, "@"); len(res) > 1 {
-		email.User = res[0]
+		email.User = strings.TrimSpace(res[0])
 		email.Host = validHost(res[1])
 	}
 	err = nil
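The util.go change above adds `strings.TrimSpace` so that the fallback split-on-`@` path no longer keeps the leading space from `MAIL FROM: user@host`. A self-contained sketch of the two extraction paths, using the same regex; `strings.ToLower` stands in for the repo's `validHost` helper here, which also does validation:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Address mirrors the User/Host pair that extractEmail fills in.
type Address struct{ User, Host string }

// Same pattern as util.go: non-greedy capture of user and host
// inside angle brackets.
var extractEmailRegex = regexp.MustCompile(`<(.+?)@(.+?)>`)

// extractEmail is a trimmed-down illustration of the function in the
// diff: prefer the <user@host> form, otherwise split on "@", trimming
// the leading space that "MAIL FROM: user@host" would leave on the
// user part (the TrimSpace fix in this commit).
func extractEmail(str string) Address {
	var email Address
	if matched := extractEmailRegex.FindStringSubmatch(str); len(matched) > 2 {
		email.User = matched[1]
		email.Host = strings.ToLower(matched[2]) // stand-in for validHost
	} else if res := strings.Split(str, "@"); len(res) > 1 {
		email.User = strings.TrimSpace(res[0])
		email.Host = strings.ToLower(res[1]) // stand-in for validHost
	}
	return email
}

func main() {
	fmt.Println(extractEmail("<test@grr.la>")) // {test grr.la}
	fmt.Println(extractEmail(" test@grr.la"))  // {test grr.la}
}
```

Without the trim, the fallback path would yield a user of `" test"` and the address would never match an allow-list lookup.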