
Show HN: A deterministic middleware to compress LLM prompts by 50-80%

The L in "LLM" Stands for Lying

LLM Doesn't Write Correct Code. It Writes Plausible Code

iPhone 17 Pro Demonstrated Running a 400B LLM

2% of ICML papers desk rejected because the authors used LLMs in their reviews

LLM Embeddings Explained: A Visual and Intuitive Guide

LLM Architecture Gallery

Are LLM merge rates not getting better?

LLMs can unmask pseudonymous users at scale with surprising accuracy

A Portrait of the Artist as an LLM

LLMs can be exhausting

LLM Writing Tropes.md

Pool spare GPU capacity to run LLMs at larger scale

EsoLang-Bench: Evaluating Genuine Reasoning in LLMs via Esoteric Languages

LLMs predict my coffee

Covenant-72B is the largest decentralized LLM pre-training run in history

LLM 'benchmark' as a 1v1 RTS game where models write code controlling the units

LLMs learn what programmers create, not how programmers work

BitNet: Inference framework for 1-bit LLMs

How I write software with LLMs

LessWrong Policy on LLM Use

Sarvam 105B, the first competitive Indian open source LLM

Reliable Software in the LLM Era

A Visual Guide to Attention Variants in Modern LLMs

Wikipedia RFC on banning LLM contributions

Aura-State: Formally Verified LLM State Machine Compiler

Ask HN: How do you deal with people who trust LLMs?

LLMs work best when the user defines their acceptance criteria first

LLM time

A survey on LLMs for spreadsheet intelligence
