Hacker News
Show HN: SmallDocs – Markdown without the frustrations
86 points by FailMore 21 hours ago | 42 comments
Hi HN, I’d like to introduce you to SmallDocs (https://sdocs.dev). SDocs is a CLI + webapp for instantly, 100% privately, and elegantly previewing and sharing markdown files. (Code: https://github.com/espressoplease/SDocs)

The more we work with command-line agents, the more `.md` files become part of our daily lives. Agents are great at producing them, but they're a little frustrating for humans: markdown files are slightly annoying to read/preview and fiddly to share/receive. SDocs was built to resolve these pain points.

If you `sdoc path/to/file.md` (after `npm i -g sdocs-dev`) it instantly opens in the browser for you to preview (with our hopefully-nice-to-look-at default styling) and you can immediately share the url.

The `.md` files our agents produce contain some of the most sensitive information we have (about codebases, unresolved bugs, production logs, etc.). For this reason 100% privacy is an essential component of SDocs.

To achieve this, SDocs URLs contain your markdown document's content as compressed base64 in the URL fragment (the bit after the `#`):

https://sdocs.dev/#md=GzcFAMT...(this is the contents of your document)...

The cool thing about the URL fragment is that it is never sent to the server (see https://developer.mozilla.org/en-US/docs/Web/URI/Reference/F...: "The fragment is not sent to the server when the URI is requested; it is processed by the client").

The sdocs.dev webapp is purely a client side decoding and rendering engine for the content stored in the url fragment. This means the contents of your document stays with you and those you choose to share it with, the SDocs server doesn't access it. (Feel free to inspect/get your agent to inspect our code to confirm this!)

Because `.md` files might play a big role in the future of work, SDocs wants to push the boundaries of styling and rendering interesting content in markdown files. There is much more to do, but to start with you can add complex styling and render charts visually. The SDocs root (which renders `sdoc.md` with our default styles) has pictures and links to some adventurous examples. `sdoc schema` and `sdoc charts` provide detailed information for you or your agent about how to make the most of SDocs formatting.

If you share a SDocs URL, your styles travel with it because they are added as YAML Front Matter - https://jekyllrb.com/docs/front-matter/ - to the markdown file. E.g.:

   ---
   styles:
     fontFamily: Lora
     baseFontSize: 17
     ...
   ---
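A sketch of how such front matter could be separated from the markdown body (illustrative only; SDocs' real parser and its exact style keys may differ):

```javascript
// Split YAML front matter from the markdown body so styles can travel
// with the document. Illustrative sketch, not SDocs' actual parser.
function splitFrontMatter(text) {
  const match = text.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!match) return { frontMatter: null, body: text };
  return { frontMatter: match[1], body: text.slice(match[0].length) };
}
```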
At work, we've been putting this project to the test. My team and I have found SDocs to be particularly useful for sharing agent debugging reports and getting easily copyable content out of Claude (e.g. a series of bash commands that need to be run).

To encourage our agents to use SDocs we add a few lines about it in our root "agent files" (e.g. ~/.claude/CLAUDE.md or ~/.codex/AGENTS.md). When you use the CLI for the first time there is an optional setup phase that does this for you.

I'm of course very interested in feedback and open to pull requests if you want to add features to SDocs.

Thank you for taking a look!




I am excited about this really cool idea. I read the update, but does it mean there are two approaches: one where you pack all the content into fragments, and another where you encrypt it on the client side, save it to the server, and reduce the content to data containing only the key?

Also, wouldn't it be better if the encryption and fragmented compression could also be handled on the web client side?


That’s exactly what happens. We encrypt client side. The server only gets encrypted data. You can inspect the network request to see what the server receives. Your decryption key always stays client side (in the URL fragment).

Also, the short URL (which saves the content in encrypted form on the server) is optional. It only happens if you specifically trigger it (by clicking “Generate” at the top of the rendered document). If you don’t click that, everything stays client side in the fragment.


A little update: I added privacy-focused optional shorter URLs to SDocs.

You can read more about the implementation here: https://sdocs.dev/#sec=short-links

Briefly:

  https://sdocs.dev/s/{short id}#k={encryption key}
                      └────┬───┘   └───────┬──────┘
                           │                │
                      sent to           never leaves
                       server           your browser

We encrypt your document client side. The encrypted document is sent to the server with an id to save it against. The encryption key stays client side in the URL fragment. (And - probably very obviously - the encryption key is required to make the server-stored text readable again.)

You can test this by opening your browser's developer tools, switching to the Network tab, clicking Generate next to the "Short URL" heading, and inspecting the request body. You will see a base64-encoded blob of random bytes, not your document.


> Markdown files are slightly annoying to read/preview

Maybe I’ve missed the intentions of markdown, but the ability to easily read the plain text version has always been the killer feature.

Rendering as html is a nice bonus.

I understand there are plenty of useful things to say “but what about…” to, like inline images, and I use them. But they still detract from what differentiated markdown in the first place.

The more of that you add, the more it could have been any document format.


I feel like things have changed as the main interface for code has (for some) become an agent running in the CLI. I certainly check my code editor far less frequently than before. Because of that, easily reading/rendering markdown files has become more of a pain for me than it used to be.

If your problem is that you don't have a text editor open... just open one to read the file? When I click on a Markdown file, it opens in my code editor with the preview pane already open. Then I close it when I'm done. How is that different from any other file on a computer?

If your problem is that you don't want to use a text editor, there are many markdown viewers out there, both dedicated (MarkLite), as part of a larger tool (Obsidian) or even in an office suite (LibreOffice Writer).

If your problem is that you don't want to leave the terminal, there are many command line markdown "renderers", at least as far as that is even technically possible (glow is markdown-specific, bat is more of a general fancy text file viewer).

I fail to see how any of these problems are even partially solved by a web app and a CLI tool that launches it, let alone any better than the existing solutions.


I can't believe I'm saying this, but this should be an Electron app. Or Tauri or whatever.

Seriously, it's a really nice Markdown app, but the "launch a CLI to urlencode your file" flow is such a messy way of doing this. Just open the file like any other app. Sure, the web version is convenient for demonstrating to people or one-off use, but it's no way to work day to day.

As for the motivation... "Fiddly to send/receive"?? Just send the file like you would any other. Don't you have to send other files? So you already have a way of doing this. Just do that. Bonus points for being able to easily receive an edited file, make diffs, etc., as well as the person on the other side being able to use whatever viewer/editor they prefer, not the one you pushed onto them.

How is sending a GIGANTIC link any better? If your file is nearly as long as the shit my LLMs write, you'll reach the chat character limit on most platforms, even though the file itself is well within file upload limits. And now there's a link shortener to solve this problem, which just defeats the purpose of it being offline and independent of a cloud service.


URL data sites are always very cool to me. The offline service worker part is great.

The analytics[1] is incredible. Thank you for sharing (and explaining)! I love this implementation.

I'm a little confused about the privacy mention. Maybe the fragment data isn't passed, but that's not a particularly strong guarantee. The JavaScript still has access, so privacy is just a promise as far as I can tell.

Am I misunderstanding something and is there a stronger mechanism in browsers preserving the fragment data's isolation? Or is there some way to prove a url is running a github repo without modification?

[1]:https://sdocs.dev/analytics


Thanks for the kind words re the analytics!

You are right re privacy. It is possible to go from URL hash -> parse -> server (that’s not what SDocs does, to be clear).

I’ve been thinking about how to prove our privacy mechanism. The idea I have in my head at the moment is to have 2+ established coding agents review the code after every merge and provide a signal (maybe visible in the footer) that, according to them, it is secure and the check was made after the latest merge. Maybe overkill?! Or maybe a new way to “prove” things?? If you have other ideas please let me know.


How about simply making the website an app and having it load your markdown file with a button and file browser, just like e.g. https://app.diagrams.net/

And I believe you can then tell the browser that you need no network communication at that point. And a user can double check that.


No, I don't have any good ideas. Just hoping someone else does, or that I'm missing something.

I think it's in the hands of browser vendors.

The agent review a la socket.dev probably doesn't address all the gaps. I think you're already doing about as much as you reasonably can.


Thanks. The question has made me wonder about the value of some sort of real time verification service.

If it were possible to isolate that part of the code and essentially freeze it for long periods, at least people would know it wasn't being tweaked under them all the time.

That is my half of a bad idea.


I have something coming out soon (just working on it). Your client (browser) has hashing algorithms built in, so it can hash all the front-end assets it receives. Every commit merged into main will cause a hash of all the public files to be generated. We will let you compare the hashes of the front-end files in your browser with the hashes from the public GH project. Interested to know what you think...

For the "prove the server doesn't touch the data" problem — the realistic path today is probably reproducible builds + published bundle hashes.

Concretely: the sdocs.dev JS bundle should be byte-for-byte reproducible from a clean checkout at a given commit. You publish { gitSha, bundleSha256 } on the landing page. Users (or agents) can compute the hash of what their browser actually loaded (DevTools → Sources → Save As → sha256) and compare.

That closes the "we swapped the JS after deploy" gap. It doesn't close "we swapped it between the verification moment and now" — SRI for SPA entrypoints is still not really a thing. That layer is on browser vendors.

The "two agents review every merge" idea upthread is creative, but I worry that once the check is automated people stop reading what's actually verified. A dumb published hash is harder to fake without getting caught.

(FWIW, working on a similar trust problem from the other end — a CLI + phone app that relays AI agent I/O between a dev's machine and their phone [codeagent-mobile.com]. "Your code never leaves your machine" is easy to say, genuinely hard to prove.)

My solution is now live at https://sdocs.dev/trust. Open to feedback

That's basically exactly what I'm working on now. We will let you compare all the publicly served files with their hashes on GitHub.

Ya. I could imagine a browser extension performing some form of verification loop for simpler webpages. Maybe too niche.

Nice implementation — the URL fragment trick for privacy is clever.

Related pattern I've leaned into heavily: treating .md files as structured state the agent reads back, not just output. YAML frontmatter parsed as fields (status, dependencies, ids), prose only in the body. Turns them from "throwaway outputs" into state the filesystem enforces across sessions — a new session can't silently drift what was decided in the previous one.

Your styling-via-frontmatter is the same mechanism applied to presentation. Have you thought about a read mode that exposes the frontmatter as structured data, for agents that consume sdoc URLs downstream?


I think the next thing I want to do (but not sure how to implement yet) is to make it easy for your agent to go from SDocs url to content. I don't know if that's via curl or a `sdoc` command, or some other way... That could include the styling Front Matter / the agent could specify it.

At the moment the most efficient way to get sdocs content into an agent is to copy the actual content. But I think that's not too beautiful.


https://sdocs.dev/trust Now lets you verify you're being served the actual open source code

I also used the fragment technique for sharing HTML snippets, but URLs became very long; I had to implement an optional URL shortener after users complained. Unfortunately that meant server interaction.

https://easyanalytica.com/tools/html-playground/


(I left a standalone comment, but:) A little update: I added privacy-focused optional shorter URLs to SDocs.

You can read more about the implementation here: https://sdocs.dev/#sec=short-links

Briefly:

  https://sdocs.dev/s/{short id}#k={encryption key}
                      └────┬───┘   └───────┬──────┘
                           │                │
                      sent to           never leaves
                       server           your browser

We encrypt your document client side. The encrypted document is sent to the server with an id to save it against. The encryption key stays client side in the URL fragment. (And - probably very obviously - the encryption key is required to make the server-stored text readable again.)

You can test this by opening your browser's developer tools, switching to the Network tab, clicking Generate next to the "Short URL" heading, and inspecting the request body. You will see a base64-encoded blob of random bytes, not your document.


Really nice implementation by the way.

Re URL length: yes... I have a feeling it could become an issue. I was wondering if a browser extension might give users the ability to have shorter URLs without losing privacy... but I haven't looked into it deeply / don't know if it would be possible (browser extensions are decent bridges between the local machine and the browser, so maybe some sort of decryption key could be used to allow for more compressed URLs...)


I doubt it would be possible; it boils down to a compression problem of compressing x amount of content into y bits. Since the content is unpredictable, it cannot be done without an intermediary to store it.

For this use case, maybe compression and then encoding would get more data into the URL before you hit a limit (or before users complain)?

I.e. .md -> gzip -> base64


This is a neat tool. I always had to manually copy-paste long texts into Notepad and convert them into md format. Obviously I couldn't parse complex sites with lots of images or those that had weird editing. This will be useful.

Thank you. If you use an AI agent you might be able to tell it to curl the target website, extract the content into a markdown file and then sdoc it. It might have some interesting ideas with images (using the hosted URLs or hosting them yourself somehow)

Soon... there are 15 competing standards.

Cool project. Heads up - there’s a commercial company with a very similar name that might decide to hassle you about it:

https://www.sdocs.com/


Thanks + thanks for the heads up. I will see what happens. It's a domain-name war out there!

In the spirit of r/IllegallySmolCats, perhaps SmolDocs is a possible option.

Using fragments for secure data has been discussed before on HN: https://news.ycombinator.com/item?id=23036515. TL;DR: it may not go directly to the server (unless you are using a buggy browser or web client), but the fragment is captured in several places.

Nice, I've also built something like this we use internally. Will it reduce token consumption as well?

Thanks. Re tokens reduction: not that I’m aware of. Would you mind explaining how it might? That could be a cool feature to add

Markdown style editing looks very easy and convenient

Thanks! One potential use case I have for it is being able to make "branded" markdown if you need to share something with a client/public facing.

I had not heard of url fragments before. Is there a size cap?

Ish, but the cap is the length of URL that the browser can handle. For desktop Chrome it's 2MB, but for mobile Safari it's 80KB.

The compression algo SDocs uses reduces the size of your markdown file by ~10x, so an 80KB URL still fits ~800KB of markdown, which is fairly beefy.


It's 2^16=65,536 bytes for Firefox

Hadn’t heard of it either - very smart, could open up lots of other privacy-friendly "client-based web" apps.

TYVM. Yeah, I am curious to explore moving into other file formats like CSVs.


