I'm Not Consulting an LLM

Show HN: How I topped the HuggingFace open LLM leaderboard on two gaming GPUs

Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training

Llm9p: LLM as a Plan 9 file system

I don't use LLMs for programming

Nuclear War: An LLM Scenario

LLMs turn an autistic communication style into a neurotypical conversation

Show HN: Mavera – Predict audience response with GANs, not LLM sentiment

Why most general-purpose Agents fail and why I'm avoiding LLM "reasoning"

LLM inference infrastructure for a systems audience

Secure LLM Scripting. Finally

FSF threatens Anthropic over infringed copyright: share your LLMs freely

Giving LLMs a personality is just good engineering

We gave the best LLMs Brainfuck, Befunge, Whitespace, Unlambda, and Shakespeare problems. Same logic as any coding interview. The best score was 11%.

Show HN: Context Gateway – Compress agent context before it hits the LLM

How we implemented weighted load balancing across LLM providers in Go

Chinese AI models censor content on behalf of the authoritarian regime – A comparison of Chinese and non-Chinese LLMs shows "substantially higher rates of refusal to respond, shorter responses, and inaccurate responses to a battery of 145 political questions in China-originating models."

I traced every layer of the stack when you send a prompt to an LLM, from keystroke to streamed token

Show HN: I logged Gemini's stock predictions for 38 days to study LLM drift

Ask HN: What Online LLM / Chat do you use?

A tool that removes censorship from open-weight LLMs

PlayerUnknown's Brendan Greene says AI content is ruining the internet because it's "a loop, LLMs are scanning this junk, and then that becomes truth… it's like a race to the middle of sh*t": "How can you trust stuff that says at the bottom you need to fact-check all the answers I'm giving you?"

AMD Ryzen AI NPUs Are Finally Useful Under Linux for Running LLMs

LLMs are bad at vibing specifications

Right-sizes LLM models to your system's RAM, CPU, and GPU

Running a One Trillion-Parameter LLM Locally on AMD Ryzen AI Max+ Cluster

LLM-driven large code rewrites with relicensing are the latest AI concern

The 4 Approaches to Using LLMs in Software Development

Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy

Show HN: 1v1 coding game that LLMs struggle with
