LLM-driven large code rewrites with relicensing are the latest AI concern

The 4 Approaches to Using LLMs in Software Development

Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy

Show HN: 1v1 coding game that LLMs struggle with

Applying the Nyquist–Shannon sampling theorem to LLM prompt engineering – empirical results from 10 head-to-head battles

Building RAG with Go and a Local LLM

OSS Coding Agent with Self-hosted LLM: own the whole stack with Opencode and vLLM

The Big LLM Architecture Comparison

Can an LLM write a moderately complex compiler for a new language?

Comparing JS framework LLM token costs using the examples at component-party.dev

Proposed architecture separating reasoning from language interface in LLMs – LSM/LTM framework

Simple Made Inevitable: The Economics of Language Choice in the LLM Era

Detecting LLM-Generated Web Novels Using "Classical" Machine Learning (AIGC Text Detection)

Soup – fine-tune any LLM in one command

We precompile our DB schema so the LLM agent stops burning turns on information_schema

PhrasePoP – An open-source AI rephrasing utility built with Rust & Tauri (Local LLM + API support)

I built Reviva in 🦀 after failing to get reliable local LLM code review for my security project

Why I filter logs with regex before sending them to an LLM – and why it makes AI analysis dramatically better

I built DocDrift: A pre-commit hook that uses Tree-sitter + Local LLMs to fix stale READMEs

Sense: LLM-powered test assertions and structured text extraction for Go

I built an open-source AI gateway in Go – routes, rate-limits, and secures LLM traffic across providers

AXIOM: Built a sparse dynamic routing architecture for LLM inference entirely in Rust. No ML frameworks, no GPU, 1.2M parameters

Use Rust code with LLM agent frameworks

I built an LLM testing library for Python – LLMTest

Flask email classifier powered by LLMs – dashboard UI, 182 tests, no frontend build step

I made a "Folding@home" swarm for local LLM research

LLMAssert: pytest plugin scores LLM output locally. 0.0 variance across 100 runs.

Track real-time GPU and LLM pricing across all cloud and inference providers

I used asyncio and dataclasses to build a "microkernel" for LLM agents – here's what I learned

I built a Rust library for LLM code execution in a sandboxed Lua REPL
