Mirror of the ArchiveBox software source repository. Useful for storing snapshots of web pages and their resources for offline viewing or preservation.
#archival #utilities #python #webpage #web-archival #web-snapshot #storage
ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view websites offline.
Without active preservation effort, everything on the internet eventually disappears or degrades. Archive.org does a great job as a free central archive, but they require all archives to be public, and they can't save every type of content.
ArchiveBox is an open source tool that helps you archive web content on your own (or privately within an organization): save copies of browser bookmarks, preserve evidence for legal cases, back up photos from FB / Insta / Flickr, download your media from YT / Soundcloud / etc., snapshot research papers & academic citations, and more...
➡️ Use ArchiveBox as a command-line package and/or self-hosted web app on Linux, macOS, or in Docker.
📥 You can feed ArchiveBox URLs one at a time, or schedule regular imports from browser bookmarks or history, feeds like RSS, bookmark services like Pocket/Pinboard, and more. See input formats for a full list.
💾 It saves snapshots of the URLs you feed it in several redundant formats.
It also detects any content featured inside each webpage & extracts it out into a folder:
- HTML/generic websites -> HTML, PDF, PNG, WARC, SingleFile
- YouTube/SoundCloud/etc. -> MP3/MP4 + subtitles, description, thumbnail
- News articles -> article body TXT + title, author, featured images
- GitHub/GitLab/etc. links -> git-cloned source code

It uses normal filesystem folders to organize archives (no complicated proprietary formats), and offers a CLI + web UI.
🏛️ ArchiveBox is used by many professionals and hobbyists who save content off the web, for example:
- backing up browser bookmarks/history, saving FB/Insta/etc. content, shopping lists
- crawling and collecting research, preserving quoted material, fact-checking and review
- evidence collection, hashing & integrity verification, search, tagging, & review
- collecting AI training sets, feeding analysis / web-crawling pipelines

The goal is to sleep soundly knowing the part of the internet you care about will be automatically preserved in durable, easily accessible formats for decades after it goes down.
📦 Get ArchiveBox with docker / apt / brew / pip3 / nix / etc. (see Quickstart below).
# Get ArchiveBox with Docker or Docker Compose (recommended)
docker run -v $PWD/data:/data -it archivebox/archivebox:dev init --setup
# Or install with your preferred package manager (see Quickstart below for apt, brew, and more)
pip3 install archivebox
# Or use the optional auto setup script to install it
curl -sSL 'https://get.archivebox.io' | sh
🔢 Example usage: adding links to archive.
archivebox add 'https://example.com' # add URLs one at a time
archivebox add < ~/Downloads/bookmarks.json # or pipe in URLs in any text-based format
archivebox schedule --every=day --depth=1 https://example.com/rss.xml # or auto-import URLs regularly on a schedule
🔢 Example usage: viewing the archived content.
archivebox server 0.0.0.0:8000 # use the interactive web UI
archivebox list 'https://example.com' # use the CLI commands (--help for more)
ls ./archive/*/index.json # or browse directly via the filesystem
Contact us if your non-profit institution/org wants to use ArchiveBox professionally.
All our work is open-source and primarily geared towards non-profits.
Support/consulting pays for hosting and funds new ArchiveBox open-source development.
🖥 Supported OSs: Linux/BSD, macOS, Windows (Docker) 👾 CPUs: amd64 (x86_64), arm64 (ARMv8), armv7 (Raspberry Pi >= 3)
Note: On armv7 the playwright package is not available, so Chromium must be installed manually if needed.
docker-compose (macOS/Linux/Windows) 👈 recommended

Download the docker-compose.yml file into a new empty directory (can be anywhere):
mkdir ~/archivebox && cd ~/archivebox
curl -O 'https://raw.githubusercontent.com/ArchiveBox/ArchiveBox/dev/docker-compose.yml'
docker compose run archivebox init --setup
docker compose up
# completely optional, CLI can always be used without running a server
# docker compose run [-T] archivebox [subcommand] [--args]
docker run (macOS/Linux/Windows)

mkdir ~/archivebox && cd ~/archivebox
docker run -v $PWD:/data -it archivebox/archivebox init --setup
docker run -v $PWD:/data -p 8000:8000 archivebox/archivebox
# completely optional, CLI can always be used without running a server
# docker run -v $PWD:/data -it archivebox/archivebox [subcommand] [--args]
bash auto-setup script (macOS/Linux)

curl -sSL 'https://get.archivebox.io' | sh

See setup.sh for the source code of the auto-install script.
apt (Ubuntu/Debian)
echo "deb http://ppa.launchpad.net/archivebox/archivebox/ubuntu focal main" | sudo tee /etc/apt/sources.list.d/archivebox.list
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C258F79DCC02E369
sudo apt update
Then install the archivebox package using apt:
sudo apt install archivebox
sudo python3 -m pip install --upgrade --ignore-installed archivebox # pip needed because apt only provides a broken older version of Django
Note: If you encounter issues with NPM/NodeJS, install a more recent version.

mkdir ~/archivebox && cd ~/archivebox
archivebox init --setup # if any problems, install with pip instead
archivebox server 0.0.0.0:8000
See below for more usage examples using the CLI, Web UI, or filesystem/SQL/Python to manage your archive.
See the debian-archivebox repo for more details about this distribution.
brew (macOS)

Install the ArchiveBox package using brew:
brew tap archivebox/archivebox
brew install archivebox
mkdir ~/archivebox && cd ~/archivebox
archivebox init --setup # if any problems, install with pip instead
archivebox server 0.0.0.0:8000
# completely optional, CLI can always be used without running a server
# archivebox [subcommand] [--args]
See the homebrew-archivebox repo for more details about this distribution.
pip (macOS/Linux/BSD)

Install the ArchiveBox package using pip3:
pip3 install archivebox
mkdir ~/archivebox && cd ~/archivebox
archivebox init --setup
# install any missing extras like wget/git/ripgrep/etc. manually as needed
archivebox server 0.0.0.0:8000
# completely optional, CLI can always be used without running a server
# archivebox [subcommand] [--args]
See the pip-archivebox repo for more details about this distribution.
pacman / pkg / nix (Arch/FreeBSD/NixOS/more)

- Arch: yay -S archivebox (contributed by @imlonghao)
- FreeBSD: curl -sSL 'https://get.archivebox.io' | sh (uses pkg + pip3 under the hood)
- Nix: nix-env --install archivebox (contributed by @siraben)
docker + electron Desktop App (macOS/Linux/Windows)

- macOS: ArchiveBox.app.zip
- Linux: ArchiveBox.deb (alpha: build manually)
- Windows: ArchiveBox.exe (beta: build manually)
Paid hosting solutions (cloud VPS)

# archivebox [subcommand] [--args]
# docker compose run archivebox [subcommand] [--args]
# docker run -v $PWD:/data -it archivebox/archivebox [subcommand] [--args]
archivebox init --setup # safe to run init multiple times (also how you update versions)
archivebox --version
archivebox help
- archivebox setup/init/config/status/manage to administer your collection
- archivebox add/schedule/remove/update/list/shell/oneshot to manage Snapshots in the archive
- archivebox schedule to pull in fresh URLs regularly from bookmarks/history/Pocket/Pinboard/RSS/etc.

archivebox manage createsuperuser # set an admin password
archivebox server 0.0.0.0:8000 # open http://127.0.0.1:8000 to view it
# you can also configure whether or not login is required for most features
archivebox config --set PUBLIC_INDEX=False
archivebox config --set PUBLIC_SNAPSHOTS=False
archivebox config --set PUBLIC_ADD_VIEW=False
sqlite3 ./index.sqlite3 # run SQL queries on your index
archivebox shell # explore the Python API in a REPL
ls ./archive/*/index.html # or inspect snapshots on the filesystem
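For instance, here is a minimal Python sketch of querying the index directly with the standard library. The core_snapshot table and column names are assumed from Django's default naming for ArchiveBox's Snapshot model; verify them against your own index.sqlite3 (e.g. with .schema) before relying on them:

```python
import sqlite3

# Open the collection index read-only so the archive can't be corrupted.
db = sqlite3.connect("file:index.sqlite3?mode=ro", uri=True)

# Table/column names assumed from Django defaults: core_snapshot(url, title, timestamp).
rows = db.execute(
    "SELECT timestamp, url, title FROM core_snapshot ORDER BY timestamp DESC LIMIT 10"
)
for timestamp, url, title in rows:
    print(f"{timestamp}  {title or '(no title)'}  {url}")

db.close()
```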
https://demo.archivebox.io

ArchiveBox supports many input formats for URLs, including Pocket & Pinboard exports, browser bookmarks, browser history, plain text, HTML, markdown, and more!
Click these links for instructions on how to prepare your links from these sources:
archivebox-exporter (realtime archiving from Chrome/Chromium/Firefox)

# archivebox add --help
archivebox add 'https://example.com/some/page'
archivebox add < ~/Downloads/firefox_bookmarks_export.html
archivebox add --depth=1 'https://news.ycombinator.com#2020-12-12'
echo 'http://example.com' | archivebox add
echo 'any_text_with [urls](https://example.com) in it' | archivebox add
# if using Docker, add -i when piping stdin:
# echo 'https://example.com' | docker run -v $PWD:/data -i archivebox/archivebox add
# if using Docker Compose, add -T when piping stdin / stdout:
# echo 'https://example.com' | docker compose run -T archivebox add
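As a rough sketch of the "any text-based format" idea, this Python snippet pulls URLs out of free-form text and pipes them to archivebox add over stdin. The regex is a naive illustration only, and the command assumes it is run from inside a data folder with archivebox on $PATH:

```python
import re
import subprocess

notes = "some notes with [links](https://example.com) and http://example.org in them"

# Naive URL extraction, for illustration only; archivebox add can also
# parse raw text itself, so this pre-filtering step is optional.
urls = re.findall(r"https?://[^\s)\"'>]+", notes)

# Pipe the URLs to `archivebox add` via stdin, one per line.
subprocess.run(["archivebox", "add"], input="\n".join(urls), text=True, check=True)
```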
See the Usage: CLI page for documentation and examples.
It also includes a built-in scheduled import feature with archivebox schedule and a browser bookmarklet, so you can pull in URLs from RSS feeds, websites, or the filesystem regularly or on demand.
Inside each Snapshot folder, ArchiveBox saves these different types of extractor outputs as plain files:
./archive/<timestamp>/*
- index.html & index.json: HTML and JSON index files containing metadata and details
- singlefile.html: HTML snapshot rendered with headless Chrome using SingleFile
- example.com/page-name.html: wget clone of the site, with warc/<timestamp>.gz
- output.pdf: printed PDF of the site using headless Chrome
- screenshot.png: 1440x900 screenshot of the site using headless Chrome
- output.html: DOM dump of the HTML after rendering using headless Chrome
- article.html/json: article text extraction using Readability & Mercury
- archive.org.txt: a link to the saved site on archive.org
- media/: all audio/video files + playlists, including subtitles & metadata, saved with youtube-dl (or yt-dlp)
- git/: clone of any repository found on GitHub, Bitbucket, or GitLab links

It does everything out-of-the-box by default, but you can disable or tweak individual archive methods via environment variables / config.
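Because the outputs are plain files, checking what a given snapshot actually captured takes only a few lines of Python. A sketch, assuming the filenames listed above (they can vary by version and config) and a hypothetical timestamp:

```python
import json
from pathlib import Path

snapshot = Path("archive/1617687755")  # hypothetical timestamp, substitute your own

# index.json holds the snapshot's metadata (url/title fields assumed from the docs)
meta = json.loads((snapshot / "index.json").read_text())
print(meta.get("url"), "->", meta.get("title"))

# Report which extractor outputs were actually produced for this snapshot
for name in ["singlefile.html", "output.pdf", "screenshot.png", "output.html", "archive.org.txt"]:
    status = "saved" if (snapshot / name).exists() else "missing or disabled"
    print(f"  {name}: {status}")
```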
ArchiveBox can be configured via environment variables, by using the archivebox config CLI, or by editing the ArchiveBox.conf config file directly.
archivebox config # view the entire config
archivebox config --get CHROME_BINARY # view a specific value
archivebox config --set CHROME_BINARY=chromium # persist a config using CLI
# OR
echo CHROME_BINARY=chromium >> ArchiveBox.conf # persist a config using file
# OR
env CHROME_BINARY=chromium archivebox ... # run with a one-off config
These methods also work the same way when run inside Docker, see the Docker Configuration wiki page for details.
The config loading logic with all the options defined is here: archivebox/config.py.
Most options are also documented on the Configuration Wiki page.
# e.g. archivebox config --set TIMEOUT=120
TIMEOUT=120 # default: 60 add more seconds on slower networks
CHECK_SSL_VALIDITY=False # default: True False = allow saving URLs w/ bad SSL
SAVE_ARCHIVE_DOT_ORG=False # default: True False = disable Archive.org saving
MEDIA_MAX_SIZE=1500m # default: 750m raise/lower youtubedl output size
PUBLIC_INDEX=True # default: True whether anon users can view index
PUBLIC_SNAPSHOTS=True # default: True whether anon users can view pages
PUBLIC_ADD_VIEW=False # default: False whether anon users can add new URLs
CHROME_USER_AGENT="Mozilla/5.0 ..." # change these to get around bot blocking
WGET_USER_AGENT="Mozilla/5.0 ..."
CURL_USER_AGENT="Mozilla/5.0 ..."
To achieve high-fidelity archives in as many situations as possible, ArchiveBox depends on a variety of high-quality 3rd-party tools and libraries that specialize in extracting different types of content.
Expand to learn more about ArchiveBox's dependencies...
For better security, easier updating, and to avoid polluting your host system with extra dependencies, it is strongly recommended to use the official Docker image with everything pre-installed for the best experience.
These optional dependencies used for archiving sites include:
- chromium / chrome (for screenshots, PDF, DOM HTML, and headless JS scripts)
- node & npm (for readability, mercury, and singlefile)
- wget (for plain HTML, static files, and WARC saving)
- curl (for fetching headers, favicon, and posting to Archive.org)
- youtube-dl or yt-dlp (for audio, video, and subtitles)
- git (for cloning git repos)

You don't need to install every dependency to use ArchiveBox. ArchiveBox will automatically disable extractors that rely on dependencies that aren't installed, based on what is configured and available in your $PATH (a sketch of this detection idea follows the commands below).
If not using Docker, make sure to keep the dependencies up-to-date yourself and check that ArchiveBox isn't reporting any incompatibility with the versions you install.
# install python3 and archivebox with your system package manager
# apt/brew/pip/etc install ... (see Quickstart instructions above)
archivebox setup # auto install all the extractors and extras
archivebox --version # see info and check validity of installed dependencies
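To illustrate the idea behind that auto-detection (this is not ArchiveBox's actual implementation), here is a Python sketch that checks $PATH for the binaries each extractor relies on:

```python
import shutil

# Illustrative mapping of extractors to the binaries they rely on; the real
# detection logic lives inside ArchiveBox's config/extractor code.
DEPENDENCIES = {
    "chrome (screenshots/PDF/DOM)": ["chromium", "chromium-browser", "google-chrome"],
    "wget (HTML/WARC)": ["wget"],
    "curl (headers/favicon)": ["curl"],
    "media (audio/video)": ["yt-dlp", "youtube-dl"],
    "git (repo cloning)": ["git"],
    "node (readability/singlefile)": ["node"],
}

for extractor, binaries in DEPENDENCIES.items():
    found = next((b for b in binaries if shutil.which(b)), None)
    print(f"{extractor}: {found or 'not found -> extractor would be disabled'}")
```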
Installing directly on Windows without Docker or WSL/WSL2/Cygwin is not officially supported (I cannot respond to Windows support tickets), but some advanced users have reported getting it working.
All of ArchiveBox's state (including the SQLite DB, archived assets, config, logs, etc.) is stored in a single folder called the "ArchiveBox Data Folder".
Data folders can be created anywhere (~/archivebox or $PWD/data as seen in our examples), and you can create more than one for different collections.
Expand to learn more about the layout of ArchiveBox's data on-disk...
All archivebox CLI commands are designed to be run from inside an ArchiveBox data folder, starting with archivebox init to initialize a new collection inside an empty directory.
mkdir ~/archivebox && cd ~/archivebox # just an example, can be anywhere
archivebox init
The on-disk layout is optimized to be easy to browse by hand and durable long-term. The main index is a standard index.sqlite3 database in the root of the data folder (it can also be exported as static JSON/HTML), and the archive snapshots are organized by date-added timestamp in the ./archive/ subfolder.
/data/
index.sqlite3
ArchiveBox.conf
archive/
...
1617687755/
index.html
index.json
screenshot.png
media/some_video.mp4
warc/1617687755.warc.gz
git/somerepo.git
...
Each snapshot subfolder ./archive/<timestamp>/ includes a static index.json and index.html describing its contents, and the snapshot extractor outputs are plain files within the folder.
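Because the format is just folders and JSON, a quick listing can be rebuilt without running a server or touching the SQLite index. A minimal sketch, assuming each snapshot's static index.json carries url and title fields as described above:

```python
import json
from pathlib import Path

# Walk every snapshot folder and print a one-line summary from its static
# index.json, skipping folders that are mid-write or malformed.
for index_file in sorted(Path("archive").glob("*/index.json")):
    try:
        meta = json.loads(index_file.read_text())
    except (OSError, json.JSONDecodeError):
        continue
    print(index_file.parent.name, meta.get("url"), "->", meta.get("title"))
```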
You can export the main index to browse it statically as plain HTML files in a folder (without needing to run a server).
Expand to learn how to export your ArchiveBox collection...
Note: These exports are not paginated; exporting many URLs or the entire archive at once may be slow. Use the filtering CLI flags on the archivebox list command to export specific Snapshots or ranges.
# archivebox list --help
archivebox list --html --with-headers > index.html # export to static html table
archivebox list --json --with-headers > index.json # export to json blob
archivebox list --csv=timestamp,url,title > index.csv # export to csv spreadsheet
# (if using Docker Compose, add the -T flag when piping)
# docker compose run -T archivebox list --html --filter-type=search snozzberries > index.html
The paths in the static exports are relative; make sure to keep them next to your ./archive folder when backing them up or viewing them.
If you're importing pages with private content or URLs containing secret tokens you don't want public (e.g. Google Docs, paywalled content, unlisted videos, etc.), you may want to disable some of the extractor methods to avoid leaking that content to 3rd-party APIs or the public.
Click to expand...
# don't save private content to ArchiveBox, e.g.:
archivebox add 'https://docs.google.com/document/d/12345somePrivateDocument'
archivebox add 'https://vimeo.com/somePrivateVideo'
# without first disabling saving to Archive.org:
archivebox config --set SAVE_ARCHIVE_DOT_ORG=False # disable saving all URLs in Archive.org
# restrict the main index, Snapshot content, and Add Page to authenticated users as-needed:
archivebox config --set PUBLIC_INDEX=False
archivebox config --set PUBLIC_SNAPSHOTS=False
archivebox config --set PUBLIC_ADD_VIEW=False
# if extra paranoid or anti-Google:
archivebox config --set SAVE_FAVICON=False # disable favicon fetching (it calls a Google API passing the URL's domain part only)
archivebox config --set CHROME_BINARY=chromium # ensure it's using Chromium instead of Chrome
Be aware that malicious archived JS can access the contents of other pages in your archive when viewed. Because the Web UI serves all viewed snapshots from a single domain, they share a request context and typical CSRF/CORS/XSS/CSP protections do not work to prevent cross-site request attacks. See the Security Overview page and Issue #239 for more details.
Click to expand...
# visiting an archived page with malicious JS:
http://127.0.0.1:8000/archive/1602401954/example.com/index.html

# example.com/index.js can now make a request to read everything from:
http://127.0.0.1:8000/index.html
http://127.0.0.1:8000/archive/*

# then example.com/index.js can send it off to some evil server
The admin UI is also served from the same origin as replayed JS, so malicious pages could also potentially use your ArchiveBox login cookies to perform admin actions (e.g. adding/removing links, running extractors, etc.). We are planning to fix this security shortcoming in a future version by using separate ports/origins to serve the Admin UI and archived content (see Issue #239).
Note: Only the wget & dom extractor methods execute archived JS when viewing snapshots; all other archive methods produce static output that does not execute JS on viewing. If you are worried about the issues above, you can disable these extractors using archivebox config --set SAVE_WGET=False SAVE_DOM=False. See also the advisory for CVE-2023-45815.

For various reasons, many large sites (Reddit, Twitter, Cloudflare, etc.) actively block archiving or bots in general. There are a number of approaches to work around this.
Click to expand...
- Set CHROME_USER_AGENT, WGET_USER_AGENT, CURL_USER_AGENT to impersonate a real browser (instead of an ArchiveBox bot)
- Use CHROME_DATA_DIR & COOKIES_FILE to archive with a logged-in browser session
- Rewrite URLs to point at alternative frontends, e.g. reddit.com/some/url -> teddit.net/some/url: https://github.com/mendel5/alternative-front-ends

In the future we plan on adding support for running JS scripts during archiving to block ads, cookie popups, modals, and fix other issues. Follow here for progress: Issue #51.

ArchiveBox appends a hash with the current date (e.g. https://example.com#2020-10-24) to differentiate when a single URL is archived multiple times.
Click to expand...
Because ArchiveBox uniquely identifies snapshots by URL, it must use a workaround to take multiple snapshots of the same URL (otherwise they would show up as a single Snapshot entry). It makes the URLs of repeated snapshots unique by adding a hash with the archive date at the end:
archivebox add 'https://example.com#2020-10-24'
...
archivebox add 'https://example.com#2020-10-25'
The Re-Snapshot button in the Admin UI is a shortcut for this hash-date multi-snapshotting workaround.
Improved support for saving multiple snapshots of a single URL without this hash-date workaround will be added eventually (along with the ability to view diffs of the changes between runs).
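Until then, the workaround is easy to script yourself. A minimal sketch, assuming the #YYYY-MM-DD convention shown above and archivebox on $PATH:

```python
import subprocess
from datetime import date

def resnapshot(url: str) -> None:
    """Re-archive a URL by appending today's date as a hash fragment,
    mirroring the Re-Snapshot button (run from inside a data folder)."""
    subprocess.run(["archivebox", "add", f"{url}#{date.today().isoformat()}"], check=True)

resnapshot("https://example.com")  # adds e.g. 'https://example.com#2024-05-01'
```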
Because ArchiveBox is designed to ingest a large volume of URLs with multiple copies of each URL stored by different 3rd-party tools, it can be quite disk-space intensive.
There are also some special requirements when using filesystems like NFS/SMB/FUSE.
Click to expand...
ArchiveBox can use anywhere from ~1 GB per 1000 articles to ~50 GB per 1000 articles, depending mostly on whether you're saving audio & video using SAVE_MEDIA=True and whether you lower MEDIA_MAX_SIZE=750m.
Disk usage can be reduced by using a compressed/deduplicated filesystem like ZFS/BTRFS, or by turning off extractor methods you don't need. You can also deduplicate content with a tool like fdupes or rdfind. Don't store large collections on older filesystems like EXT3/FAT, as they may not be able to handle more than 50k directory entries in the archive/ folder. Try to keep the index.sqlite3 file on a local drive (not a network mount) or SSD for maximum performance; the archive/ folder, however, can live on a network mount or slower HDD.
If using Docker or NFS/SMB/FUSE for the data/archive/ folder, you may need to set PUID & PGID and disable root_squash on your fileshare server.
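To see where the space is actually going, here is a small Python sketch that totals on-disk usage per snapshot folder (pure standard library, no ArchiveBox internals assumed):

```python
from pathlib import Path

def folder_size(path: Path) -> int:
    """Sum the sizes of all regular files under a folder (symlinks skipped)."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file() and not f.is_symlink())

# Print the 10 largest snapshot folders, largest first.
sizes = sorted(
    ((folder_size(p), p.name) for p in Path("archive").iterdir() if p.is_dir()),
    reverse=True,
)
for size, name in sizes[:10]:
    print(f"{size / 1e9:6.2f} GB  archive/{name}")
```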
ArchiveBox aims to enable more of the internet to be saved from deterioration by empowering people to self-host their own archives. The intent is for all the web content you care about to be viewable with common software in 50 - 100 years without needing to run ArchiveBox or other specialized software to replay it.
Click to read more...
Vast treasure troves of knowledge are lost every day on the internet to link rot. As a society, we have an imperative to preserve some important parts of that treasure, just like we preserve our books, paintings, and music in physical libraries long after the originals go out of print or fade into obscurity.
Whether it's to resist censorship by saving articles before they get taken down or edited, or just to save a collection of early 2010's flash games you love to play, having the tools to archive internet content enables you to save the stuff you care most about before it disappears.
The balance between the permanence and ephemeral nature of content on the internet is part of what makes it beautiful. I don't think everything should be preserved in an automated fashion (making all content permanent and never removable), but I do think people should be able to decide for themselves and effectively archive the specific content they care about.
Because modern websites are complicated and often rely on dynamic content, ArchiveBox archives the sites in several different formats beyond what public archiving services like Archive.org/Archive.is save. Using multiple methods and the market-dominant browser to execute JS ensures we can save even the most complex, finicky websites in at least a few high-quality, long-term data formats.
[!TIP] Check out our community page for an index of web archiving initiatives and projects.
A variety of open and closed-source archiving projects exist, but few provide a nice UI and CLI to manage a large, high-fidelity archive collection over time.
ArchiveBox tries to be a robust, set-and-forget archiving solution suitable for archiving RSS feeds, bookmarks, or your entire browsing history (beware, it may be too big to store), including private/authenticated content that you wouldn't otherwise share with a centralized service (this is not recommended due to JS replay security concerns).
Not all content is suitable to be archived in a centralized collection, whether because it's private, copyrighted, too large, or too complex. ArchiveBox hopes to fill that gap.
By having each user store their own content locally, we can save much larger portions of everyone's browsing history than a shared centralized service would be able to handle. The eventual goal is to work towards federated archiving where users can share portions of their collections with each other.
ArchiveBox differentiates itself from similar self-hosted projects by providing both a comprehensive CLI interface for managing your archive, a Web UI that can be used either independently or together with the CLI, and a simple on-disk data format that can be used without either.
Whether you want to learn which organizations are the big players in the web archiving space, want to find a specific open-source tool for your web archiving need, or just want to see where archivists hang out online, our Community Wiki page serves as an index of the broader web archiving community. Check it out to learn about some of the coolest web archiving projects and communities on the web!
Need help building a custom archiving solution?
✨ Hire the team that built ArchiveBox to work on your project. (@ArchiveBoxApp)
(We also offer general software consulting across many industries)

We use the GitHub wiki system and Read the Docs (WIP) for documentation.
You can also access the docs locally by looking in the ArchiveBox/docs/ folder.
All contributions to ArchiveBox are welcomed! Check our issues and Roadmap for things to work on, and please open an issue to discuss your proposed implementation before working on things! Otherwise we may have to close your PR if it doesn't align with our roadmap.
For low hanging fruit / easy first tickets, see: ArchiveBox/Issues #good first ticket #help wanted.
Python API Documentation: https://docs.archivebox.io/en/dev/archivebox.html#module-archivebox.main
See the ./bin/ folder and read the source of the bash scripts within.
You can also run all these in Docker. For more examples see the GitHub Actions CI/CD tests that are run: .github/workflows/*.yaml.