This guide will show you how to use waybackpack, a handy little command-line utility that automatically downloads every Wayback Machine archive of any given URL.
Aside from the curiosity of seeing how websites change over time, this is especially useful for research and cyber sleuthing, as it can help you see whether pages have been altered.
Everything in this guide was tested on a Debian system. I haven't tested other distributions, so do your own research.
Step 1. Go to the waybackpack GitHub page (https://github.com/jsvine/waybackpack), download the zip file and extract it into your home folder.
Step 2. Change directory to the newly extracted folder:

cd waybackpack-master
Step 3. Install waybackpack by typing the following (note that you'll need pip installed if you don't have it already):

sudo pip install waybackpack
Step 1. Make a new directory for whatever URL you wish to back up:

mkdir node_history
Step 2. Type waybackpack, the URL you want to look at, followed by the -d option and then the path to the new directory you just created. In this example we'll download all the archives of the NODE homepage:

waybackpack n-o-d-e.net -d ~/node_history
Step 3. Now you can check the folder and browse through the HTML files to see how things have changed over time. One thing to note is that it doesn't currently download images or other linked files, so unless those are still present on the original server, their links will be broken. From what I've read, support for this is planned in a future update.
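If you're curious how this works under the hood, waybackpack builds on the Wayback Machine's public CDX index, which lists every snapshot timestamp recorded for a URL, and each archived copy lives at a predictable address. Here's a rough Python sketch of that lookup; the helper names (cdx_query_url, snapshot_url) are my own illustrations, not part of waybackpack's API.

```python
# Illustrative sketch of the Wayback Machine addressing scheme that
# tools like waybackpack build on. Helper names are hypothetical.

def cdx_query_url(target):
    """Build a CDX index query listing snapshot timestamps for a URL."""
    return ("https://web.archive.org/cdx/search/cdx"
            "?url=" + target + "&fl=timestamp")

def snapshot_url(target, timestamp):
    """An archived copy lives at web.archive.org/web/<timestamp>/<url>."""
    return "https://web.archive.org/web/{}/{}".format(timestamp, target)

print(cdx_query_url("n-o-d-e.net"))
print(snapshot_url("n-o-d-e.net", "20150101000000"))
```

Fetching the first URL returns one timestamp per line; plugging each timestamp into the second pattern gives you the page exactly as it was archived at that moment.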
Check out the README file in the waybackpack directory for extra options, like adding time boundaries and more.
So that's the waybackpack utility, simple and pretty handy.
For example, this type of tool is one way people can help dispute false claims, like those made by people pretending to be Satoshi Nakamoto, because the snapshots on archive.org's Wayback Machine can both disprove timelines and reveal evidence of information tampering.