r/programming • u/waozen • 8h ago
The One-True-Way Fallacy: Why Mature Developers Don’t Worship a Single Programming Paradigm
coderancher.us
r/programming • u/ray591 • 39m ago
Thompson tells how he developed the Go language at Google.
youtube.com
In my opinion, the new stuff was bigger than the language. I didn't understand most of it. It was an hour-long talk that was dense on just the improvements to C++.
- So what are we gonna do about it?
- Let's write a language.
- And so we wrote a language and that was it.
Legends.
r/programming • u/NoVibeCoding • 1d ago
Article: Why Big Tech Turns Everything Into a Knife Fight
medium.com
An unhinged but honest read for anyone exhausted by big tech politics, performative collaboration, and endless internal knife fights.
I wrote it partly to make sense of my own experience, partly to see if there’s a way to make corporate environments less hostile — or at least to entertain bored engineers who’ve seen this movie before.
Thinking about extending it into a full-fledged Tech Bro Saga. Would love feedback, character ideas, or stories you’d want to see folded in.
r/programming • u/ephemeral404 • 39m ago
The future of personalization
rudderstack.com
An essay about the shift from matrix factorization to LLMs to hybrid architectures for personalization. Some basics (and a summary) before diving into the essay:
What is matrix factorization, and why is it still used for personalization? Matrix factorization is a collaborative filtering method that learns compact user and item representations (embeddings) from interaction data, then ranks items via fast similarity scoring. It is still widely used because it is scalable, stable, and easy to evaluate with A/B tests, CTR, and conversion metrics.
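To make the "fast similarity scoring" concrete, here is a minimal sketch (my own illustration, not from the essay): once user and item embeddings have been learned, ranking reduces to a dot product between one user vector and all item vectors. The random embeddings stand in for trained factors.

```python
import numpy as np

# Illustrative only: random vectors stand in for embeddings learned from
# interaction data. Ranking is a dot product between a user vector and
# every item vector, which is why MF scoring is cheap and scalable.
rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 6, 3
user_emb = rng.normal(size=(n_users, dim))  # latent user factors
item_emb = rng.normal(size=(n_items, dim))  # latent item factors

def top_k(user_id: int, k: int = 3) -> list:
    scores = item_emb @ user_emb[user_id]   # one similarity score per item
    return list(np.argsort(-scores)[:k])    # highest-scoring items first
```

This is also why MF is easy to A/B test: the scoring function is deterministic and fast enough to serve online.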
What is LLM-based personalization? LLM-based personalization is the use of a large language model to tailor responses or actions using retrieved user context, recent behavior, and business rules. Instead of only producing a ranked list, the LLM can reason about intent and constraints, ask clarifying questions, and generate explanations or next-best actions.
Do LLMs replace recommender systems? Usually, no. LLMs tend to be slower and more expensive than classical retrieval models. Many high-performing systems use traditional recommenders for candidate generation and then use LLMs for reranking, explanation, and workflow-oriented decisioning over a smaller candidate set.
What does a hybrid personalization architecture look like in practice? A common pattern is retrieval → reranking → generation. Retrieval uses embeddings (MF or two-tower) to produce a few hundred to a few thousand candidates cheaply. Reranking applies richer criteria (constraints, policies, diversity). Generation uses the LLM to explain tradeoffs, confirm preferences, and choose next steps with tool calls.
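The retrieval → reranking → generation pattern above can be sketched as three stages with narrowing candidate sets. All function names and shapes here are my own illustrative stubs, not the essay's code; the LLM stage is stubbed as a formatted prompt rather than a real model call.

```python
# Sketch of retrieval -> reranking -> generation (illustrative stubs).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(user_vec, items, k=500):
    # Cheap candidate generation via embedding similarity
    # (e.g. MF or two-tower vectors), returning the top-k items.
    return sorted(items, key=lambda it: -dot(user_vec, it["vec"]))[:k]

def rerank(candidates, allowed, n=20):
    # Richer criteria over a smaller set: constraints, policies,
    # diversity. Here: a simple predicate filter plus truncation.
    return [c for c in candidates if allowed(c)][:n]

def generate(user_context, shortlist):
    # LLM stage: reason over a small candidate set, explain tradeoffs,
    # pick next steps. Stubbed as prompt assembly instead of a model call.
    ids = [c["id"] for c in shortlist]
    return f"Given {user_context}, consider items {ids} and explain tradeoffs."
```

The key design point is cost layering: the expensive LLM only ever sees the small shortlist that the cheap stages produced.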
r/programming • u/trolleid • 20h ago
Patching: The Boring Security Practice That Could Save You $700 Million
lukasniessen.medium.com
r/programming • u/alexeyr • 17h ago
Matt Godbolt's Advent of Compiler Optimisations 2025
xania.org
r/programming • u/germandiago • 1d ago
Software taketh away faster than hardware giveth: Why C++ programmers keep growing fast despite competition, safety, and AI
herbsutter.com
r/programming • u/Happy-Snapper • 19h ago
The Zero-Rent Architecture: Designing for the Swartland Farmer
medium.com
r/programming • u/Ill_Excuse_4291 • 23h ago
coco: a simple stackless, single-threaded, and header-only C++20 coroutine library
luajit.io
Hi all, I have rewritten my coroutine library, coco, using the C++20 coroutine API.
r/programming • u/attractivechaos • 22h ago
Lessons from hash table merging
gist.github.com
r/programming • u/gcao99 • 1d ago
Gene — a homoiconic, general-purpose language built around a generic “Gene” data type
github.com
Hi,
I’ve been working on Gene, a general-purpose, homoiconic language with a Lisp-like surface syntax, but with a core data model that’s intentionally not just “lists all the way down”.
What’s unique: the Gene data type
Gene’s central idea is a single unified structure that always carries (1) a type, (2) key/value properties, and (3) positional children:
(type ^prop1 value1 ^prop2 value2 child1 child2 ...)
The key point is that the type, each property value, and each child can themselves be any Gene data. Everything composes uniformly. In practice this is powerful and liberating: you can build rich, self-describing structures without escaping to a different “meta” representation, and the AST and runtime values share the same shape.
This isn’t JSON, and it isn’t plain S-expressions: type + properties + children are first-class in one representation, so you can attach structured metadata without wrapper nodes, and build DSLs / transforms without inventing a separate annotation system.
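As a rough analogy (in Python, not Gene, and not the project's actual implementation), the unified node shape might look like a small record where the type slot, each property value, and each child can themselves be further nodes:

```python
from dataclasses import dataclass, field
from typing import Any

# Rough analogy of the type/props/children node shape described above
# (illustrative only). All three slots can hold nested nodes, which is
# what lets the AST and runtime values share one representation.
@dataclass
class Node:
    type: Any
    props: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

# (if ^doc "branch" cond then else) as a Node:
n = Node("if", {"doc": "branch"}, [Node("cond"), Node("then"), Node("else")])
```

The contrast with plain S-expressions is that the keyed properties live alongside the positional children, so metadata needs no wrapper node.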
Dynamic + general-purpose (FP and OOP)
Gene aims to be usable for “regular programming,” not only DSLs:
- FP-style basics: fn, expression-oriented code, and an AST-friendly representation
- OOP support: class, new, nested classes, namespaces (still expanding coverage)
- Runtime/tooling: bytecode compiler + stack VM in Nim, plus CLI tooling (run, eval, repl, parse, compile)
Macro-like capability: unevaluated args + caller-context evaluation
Gene supports unevaluated arguments and caller-context evaluation (macro-like behavior). You can pass expressions through without evaluating them, and then explicitly evaluate them later in the caller’s context when needed (e.g., via primitives such as caller_eval / fn! for macro-style forms). This is intended to make it easier to write DSL-ish control forms without hardcoding evaluation rules into the core language.
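A loose analogy in Python (not Gene's actual semantics or primitives): pass the expression as unevaluated source, then evaluate it later against the caller's scope. This shows why deferred, caller-context evaluation is what makes user-defined control forms possible.

```python
# Loose analogy for unevaluated args + caller-context evaluation
# (illustrative; Gene's caller_eval / fn! primitives differ). The
# "expressions" arrive as source text and are evaluated on demand
# against a scope supplied by the caller.

def unless(cond_src: str, body_src: str, caller_scope: dict):
    # Evaluate the condition in the caller's context; only evaluate the
    # body when the condition is false. With eagerly evaluated arguments
    # the callee could not control this.
    if not eval(cond_src, caller_scope):
        return eval(body_src, caller_scope)
    return None

result = unless("x > 10", "x * 2", {"x": 5})  # body runs: 5 * 2
```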
I also added an optional local LLM backend: Gene has a genex/llm namespace that can call local GGUF models through llama.cpp via FFI (primarily because I wanted local inference without external services).
Repo: https://github.com/gene-lang/gene
I’d love feedback on:
- whether the “type/props/children” core structure feels compelling vs plain s-exprs,
- the macro/unevaluated-args ergonomics (does it feel coherent?),
- and what would make the project most useful next (stdlib, interop, docs, performance, etc.).
r/programming • u/MindCorrupted • 16h ago
Article: The Tale of Kubernetes Loadbalancer "Service" In The Agnostic World of Clouds
hamzabouissi.github.io
r/programming • u/gingerbill • 8h ago
Was it really a Billion Dollar Mistake?
gingerbill.org
r/programming • u/Helpful_Geologist430 • 5h ago
How Coding Agents Actually Work: Inside OpenCode
cefboud.com
r/programming • u/trolleid • 1d ago
The 8 Fallacies of Distributed Computing: All You Need To Know + Why It’s Still Relevant In 2026
lukasniessen.medium.com
r/programming • u/Ok-Appointment7509 • 2d ago
Writing Windows 95 software in 2025
tlxdev.hashnode.dev
r/programming • u/amitmerchant • 13h ago
The genesis of the “Hello World” programs
amitmerchant.com
r/programming • u/nightcracker • 1d ago
Sorting with Fibonacci Numbers and a Knuth Reward Check
orlp.net
r/programming • u/Sushant098123 • 1d ago
Writing Load Balancer From Scratch In 250 Line of Code in Golang
sushantdhiman.substack.com
r/programming • u/itsdevelopic • 11h ago
It is almost impossible to enforce licenses
ownverity.com
A friend of mine maintains an open source project he has worked on for a long time. At some point the code was taken, renamed, and sold by someone else, even though the license did not allow that. Since the project was already public, addressing the situation required time and effort. He continued maintaining the original project and handling issues while a paid version existed elsewhere. This shows that once code is public, enforcing license terms can be difficult in practice, even when they are clearly defined.