zeerg 5 hours ago

Hi HN, I'm Claudio. I built Readit (https://readit.md) because I was tired of keeping my system prompts and documentation in sync across different LLM chats.

Every time I started a new session for a project, I found myself manually copy-pasting the same stack definitions, coding guidelines, and API references. I wanted a way to pass a "state" to the agent via a single URL, without relying on custom GPTs and without the copy-paste fatigue.

HOW IT WORKS

Readit serves dynamic Markdown. You point the LLM to a URL, and it fetches a rendered context. Unlike a static Gist or Pastebin, Readit treats Markdown as a dynamic template:

- Templating: It uses Liquid to handle variables, loops, and logic.

- Transclusion: You can embed other Markdown files (local or remote) directly into the main response.

- Searchable: The URL accepts query params (?q=...), so the server can filter content before rendering the Markdown for the LLM (see the sketch below).
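
As a rough sketch of what "context-as-a-URL" looks like from the caller's side (the document id and query below are placeholders, and this is just one way a script or agent tool could consume a page):

    // Fetch a Readit page as Markdown, filtered server-side via ?q=
    // (the doc id and query string here are hypothetical placeholders).
    const url = "https://readit.md/<your-doc-id>/project-context?q=deployment";

    const res = await fetch(url);
    if (!res.ok) throw new Error(`Readit fetch failed: ${res.status}`);

    const context = await res.text();
    // Prepend the rendered Markdown to whatever prompt you send to your LLM.
    const prompt = `Project context:\n\n${context}\n\nWrite the deployment script.`;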

THE TECH

The stack is Node.js, TypeScript, and Fastify, paired with a React frontend. Postgres handles storage and the recursive file structures, and I'm currently integrating pgvector for semantic search. And a lot of coffee.
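
For a sense of the shape (a minimal sketch, not the actual Readit code; loadDocument stands in for the real Postgres lookup), a Fastify route that renders a stored Liquid template with the ?q= filter might look roughly like this:

    import Fastify from "fastify";
    import { Liquid } from "liquidjs";

    const app = Fastify();
    const engine = new Liquid();

    // Hypothetical helper: in the real service this would resolve a document
    // id to its Liquid-flavored Markdown source stored in Postgres.
    async function loadDocument(id: string): Promise<string> {
      return "# {{ project }}\n\nFiltered by: {{ q }}";
    }

    app.get("/:id/:slug", async (req) => {
      const { id } = req.params as { id: string };
      const { q = "" } = req.query as { q?: string };

      const template = await loadDocument(id);
      // Liquid handles the variables/loops/logic; the query param is passed
      // into the scope so the template can filter or highlight sections.
      return engine.parseAndRender(template, { project: "Readit", q });
    });

    app.listen({ port: 3000 });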

It's free to use. I'd love to hear your feedback on the architecture, and whether this "context-as-a-URL" approach is useful for your workflows.

TRY THE META-DEMO (NO SIGNUP)

The documentation is itself hosted on a Readit page (of course). You can try it out by pasting the docs URL into ChatGPT, Claude, or Gemini and asking technical questions about the tool itself.

1) Copy the docs URL: https://readit.md/gi0wQgl6GoFx37MY/readit-docs

2) Paste into your LLM

3) Ask: "Write a Python script to push a commit log using the API described in these docs" or "Explain how the templating engine handles search results."