Reddit’s API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.
The key point: This doesn’t touch Reddit’s servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.
What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.
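For a sense of how the input side works, here is a minimal sketch (my illustration, not the tool's actual pipeline) of streaming a Pushshift .zst dump line by line without ever inflating it to disk; the filename is hypothetical:

```python
import io
import json
import zstandard  # pip install zstandard

def iter_dump(path):
    """Yield one JSON object per line from a Pushshift .zst dump."""
    with open(path, "rb") as fh:
        # Pushshift dumps use a long zstd window; allow up to 2 GiB.
        dctx = zstandard.ZstdDecompressor(max_window_size=2**31)
        reader = io.TextIOWrapper(dctx.stream_reader(fh), encoding="utf-8")
        for line in reader:
            yield json.loads(line)

# Hypothetical filename from the monthly submission dumps:
for post in iter_dump("RS_2015-01.zst"):
    print(post["subreddit"], post["title"])
    break
```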
API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.
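To give a feel for it, a hedged example of querying such an API from Python; the base URL, port, endpoint path, and response shape here are assumptions, not the documented surface (check the repo's API docs):

```python
import requests

BASE = "http://localhost:5000/api"  # assumed host/port and prefix

# Hypothetical full-text search endpoint and parameters:
r = requests.get(f"{BASE}/search", params={"q": "raspberry pi", "limit": 5})
r.raise_for_status()
for hit in r.json().get("results", []):
    print(hit.get("title"))
```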
Self-hosting options:
- USB drive / local folder (just open the HTML files)
- Home server on your LAN (see the sketch after this list)
- Tor hidden service (2 commands, no port forwarding needed)
- VPS with HTTPS
- GitHub Pages for small archives
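For the LAN option above, a minimal sketch assuming the generated site lives in ./output (directory name hypothetical); Python's built-in http.server is enough, no extra dependencies:

```python
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# Serve the static archive to the whole LAN on port 8080.
handler = partial(SimpleHTTPRequestHandler, directory="output")
ThreadingHTTPServer(("0.0.0.0", 8080), handler).serve_forever()
```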
Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.
Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic.
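The constant-memory claim comes down to streaming: never hold the dataset in RAM, flush to PostgreSQL in fixed-size batches. A sketch of that pattern (my guess at the approach, with a hypothetical table schema, not the tool's actual code):

```python
import itertools
import psycopg2
from psycopg2.extras import execute_values

def load(posts, dsn, batch=5000):
    """Insert an arbitrarily large iterable of posts in fixed-size batches."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        it = iter(posts)
        while chunk := list(itertools.islice(it, batch)):
            execute_values(
                cur,
                "INSERT INTO posts (id, subreddit, title) VALUES %s "
                "ON CONFLICT (id) DO NOTHING",
                [(p["id"], p["subreddit"], p["title"]) for p in chunk],
            )
            conn.commit()  # memory use stays flat at one batch
```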
How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is “trust but verify” – it accelerates the boring parts but you still own the architecture.
Live demo: https://online-archives.github.io/redd-archiver-example/
GitHub: https://github.com/19-84/redd-archiver (Public Domain)
Pushshift torrent: https://academictorrents.com/details/1614740ac8c94505e4ecb9d88be8bed7b6afddd4
It would be neat for someone to migrate this dataset to a Lemmy instance.
Now this is a good idea.
I don't know if historic data is very interesting. It's the new content we're interested in…
So kinda like Kiwix but for Reddit. That is so cool.
You should be very proud of this project!! Thank you for sharing.
This seems especially handy for anyone who wants a snapshot of Reddit from the pre-enshittification, pre-AI era, when content was more authentic and less driven by bots and commercial manipulation of opinion. Just choose the cutoff date you want and stick with that dataset.
What period does the dataset cover, i.e. up through which date?
however, the data through 2025-12 has already been released; it just needs to be split and reprocessed for 2025 by watchful1. once that happens, you can host an archive up to the end of 2025. i will probably add support for importing data from the arctic shift dumps instead, so that archives can be updated monthly.
It literally says in the link. Go to the link; it's the title.
Just so you’re aware, it is very noticeable that you also used AI to help write this post and its use of language can throw a lot of people off.
Not to detract from your project, which looks cool!
Yes I used AI, English is not my first language. Thank you for the kind words!
You're awesome. AI is fun and there's nothing wrong with using it, especially how you did. Lemmy was hit hard with AI hate propaganda. China probably trying to stop its growth and development in other countries or some stupid shit like that. But you're good. Fuck them.
Removed by mod
Would love to see you learn an entire foreign language just so you are able to communicate with the world without being laughed at by people as hostile as yourself.
I can’t even learn my own language!
Bruh, you do not seem like a nice person to be around.
Spread love and kindness, not hate.
I hope you have a better rest of your day.
Shut the fuck up loser.
You must have been born behind a cow. (Jamaican Patois: "Yu mussi bawn backacow")
I fucking hate lemmy sometimes.
Fuck Reddit and Fuck Spez.
You know what would be a good way to do it? Take all that content and throw it on a federated service like ours. Publicly visible. No bullshit. And no reason to visit Reddit to get that content. Take their traffic away.
Where would it be hosted so that Condé Nast lawyers can't touch it?
PLEASE SHARE ON REDDIT!!! I have never had a reddit account and they will NOT let me post about this!!
We can't share this on Reddit, but we can share it on other platforms. Basically, what you have done is scrape tons of data that could feed AI training, something like "create your own AI Redditor", and greedy Reddit management will dislike that very much even if you tell them it is for cultural inheritance. Your work is great anyway. Sadly, I do not have enough free space to download and store all this data.
Anyone doing this will be banned on that platform.
How does this compare to redarc? It seems to be similar.
redarc uses React to serve its web app; redd-archiver uses a hybrid architecture that combines static page generation with Postgres search via Flask. it's more of a hybrid static site generator with web-app capabilities through Docker and Flask. the static pages, with sorted indexes, can be viewed offline and served on hosts like GitHub Pages and Codeberg Pages.
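roughly the idea, as a sketch (illustrative only, with hypothetical table and column names, not the actual redd-archiver code): static pages served as plain files, one Flask route hitting Postgres full-text search:

```python
from flask import Flask, request, jsonify
import psycopg2

# Static pages are plain files; only /search needs the database.
app = Flask(__name__, static_folder="output", static_url_path="")

@app.get("/search")
def search():
    q = request.args.get("q", "")
    with psycopg2.connect("dbname=archive") as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, title FROM posts "
            "WHERE to_tsvector('english', title) @@ plainto_tsquery(%s) "
            "LIMIT 25",
            (q,),
        )
        return jsonify([{"id": i, "title": t} for i, t in cur.fetchall()])
```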
And only a 3.28 TB database? Oh, because it’s compressed. Includes comments too, though.
Yeah, it should inflate to 15TB or more I think
If only I had the space and bandwidth. I would host a mirror via Lemmy and drag the traffic away.
Actually, isn't there a way to decentralize this so it can be accessed from regular browsers on the internet? Live content here, archive everywhere.
Someone could format it into essentially static pages and publish it on IPFS. That would probably be the easiest “decentralized hosting” method that remains browsable
Yes! Too many comments to count in a reasonable amount of time!
Wow, great idea. So much useful information and discussion that users have contributed. Looking forward to checking this out.
thank you!!! i built on great ideas from others! i can't take all the credit 😋
Reddit is hot stinky garbage but can be useful for stuff like technical support and home maintenance.
Voat and Ruqqus are straight-up misinformation and fascist propaganda, and if you excise them from your data set, your data will dramatically improve.
the great part is that since the pipeline is already built, it is easy to support additional data sources! there is even an issue template to submit a new data source! https://github.com/19-84/redd-archiver/blob/main/.github/ISSUE_TEMPLATE/submit-data-source.yml
I think this is a good use case for AI, and I'm impressed with it. I wish the instructions were clearer on how to set it up, though.
thank you! the instructions are a little overwhelming, check out the quickstart if you haven't yet! https://github.com/19-84/redd-archiver/blob/main/QUICKSTART.md
Removed by mod
Maybe read where OP says ‘Yes I used AI, English is not my first language.’ Furthermore, are ethnic slurs really necessary here?
Then he's no better than Reddit, which also uses AI, no?
I disagree. I don’t like AI slop. But he’s using AI here in a way that is very much intended. I want to share something in Mandarin, I don’t know Mandarin. If only there was a way to transform my thoughts into Mandarin…
How many languages do you know fluently? I get that people have a definite opinion about AI. Like I told another Lemmy user, I have a definite opinion about the ‘arr’ stack which conservatively, 75% of selfhosters run. However, you don’t hear me out here beating my tin pan at the very mention of the ‘arr’ stack. Why? Because I assume you all are autonomous adults, capable of making your own decisions. Secondly, wouldn’t that get a bit tedious and annoying over time? If you don’t like AI, don’t use it ffs. Why castigate individuals who use AI? What does that do? I would really like to know what denigrating and browbeating users who use AI accomplishes.