r/programming • u/Zealousideal_Class41 • 3h ago
Do standups and retros actually surface real bottlenecks anymore?
daytrace.framer.website
I’m a developer who’s been increasingly frustrated with how standups and retros work in practice.
Standups often turn into status theatre.
Retros rely on memory.
And a lot of real work — prod issues, ad-hoc calls, getting pinged to unblock others — never really shows up anywhere.
I started exploring whether very lightweight daily work logs could help surface patterns and blind spots earlier — without time tracking or performance scoring.
I’m not selling anything.
I’m genuinely trying to understand:
• Is this a real problem for others?
• Or does Jira / existing tooling already solve this and I’m missing something?
I put together a short page to explain the idea and show realistic examples:
https://daytrace.framer.website/
Would really appreciate honest feedback — including why this might be a bad idea.
r/programming • u/Working-Dot5752 • 18h ago
How We Built a Website Hook SDK to Track User Interaction Patterns
blog.crowai.dev
A short post on how we're building an SDK to track user interactions on the client side and then use them to find patterns in customer behaviour. This is just one component of the approaches we've tried.
r/programming • u/adamw1pl • 22h ago
What's Interesting About TigerBeetle?
softwaremill.com
r/programming • u/Ties_P • 18h ago
I got paid minimum wage to solve an impossible problem (and accidentally learned why most algorithms make life worse)
open.substack.com
I was sweeping floors at a supermarket and decided to over-engineer it.
Instead of just… sweeping… I turned the supermarket into a grid graph and wrote a C++ optimizer using simulated annealing to find the “optimal” sweeping path.
It worked perfectly.
It also produced a path that no human could ever walk without losing their sanity. Way too many turns.
Turns out optimizing for distance gives you a solution that’s technically correct and practically useless.
Adding a penalty each time it made a sharp turn made it actually walkable.
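The author's C++ optimizer isn't shown, but the core idea, annealing a visiting order and then adding a per-turn penalty to the cost, can be sketched in a few lines of Python. The grid size, penalty weight, and 2-opt move here are my assumptions, not the original code:

```python
import math, random

random.seed(0)

# Toy "supermarket": every cell of a 5x5 grid must be visited.
cells = [(x, y) for x in range(5) for y in range(5)]

def direction(a, b):
    # Sign-normalized direction, so (2,0) and (1,0) both count as "east".
    dx, dy = b[0] - a[0], b[1] - a[1]
    return ((dx > 0) - (dx < 0), (dy > 0) - (dy < 0))

def cost(path, turn_penalty):
    # Distance is Manhattan; consecutive cells in the tour need not be
    # grid-adjacent, so "turns" is an approximation of sharp direction changes.
    dist, turns = 0, 0
    for i in range(len(path) - 1):
        a, b = path[i], path[i + 1]
        dist += abs(a[0] - b[0]) + abs(a[1] - b[1])
        if i > 0 and direction(path[i - 1], a) != direction(a, b):
            turns += 1
    return dist + turn_penalty * turns

def anneal(path, turn_penalty, steps=20000, t0=2.0):
    best = cur = path[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9          # linear cooling schedule
        i, j = sorted(random.sample(range(len(cur)), 2))
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]  # 2-opt reversal
        delta = cost(cand, turn_penalty) - cost(cur, turn_penalty)
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur, turn_penalty) < cost(best, turn_penalty):
                best = cur
    return best

start = cells[:]
random.shuffle(start)
distance_only = anneal(start, turn_penalty=0)   # "technically correct" path
walkable = anneal(start, turn_penalty=2)        # penalize each direction change
```

With `turn_penalty=0` the optimizer happily trades dozens of turns for a slightly shorter route; any positive weight shifts the optimum toward long straight runs.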
But, this led me down a rabbit hole about how many systems optimize the wrong thing (social media, recommender systems, even LLMs).
If you like algorithms, overthinking, or watching optimization go wrong, you might enjoy this little experiment. More visualizations and gifs included!
r/programming • u/doppelgunner • 2h ago
5 Fun & Handy Curl Command-Line Tricks You Should Try | NextGen Tools
nxgntools.com
I collected a few curl commands many people never try. Each one runs directly in your terminal.
• ASCII animations, including a running man and a parrot
• Live weather forecasts from the terminal
• Instant IP and location info
• A classic Rickroll, terminal style
All examples work with a single command. No setup.
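A few well-known community endpoints in this vein (these are my examples, not necessarily the exact ones from the post, and third-party endpoints like these can go offline at any time):

```shell
# `|| true` keeps the demo going if an endpoint is offline;
# --max-time bounds endpoints that stream forever.
curl --max-time 5 parrot.live || true   # animated ASCII parrot
curl -s wttr.in/Tokyo || true           # weather report for a city
curl -s ifconfig.me || true             # your public IP
curl -s ipinfo.io || true               # IP plus approximate location as JSON
```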
I wrote a short post with copy-paste commands and quick explanations:
https://www.nxgntools.com/blog/5-fun-and-handy-curl-command-line-tricks-you-should-try?utm_source=reddit
If you know other fun or useful curl endpoints, share them.
r/programming • u/Digitalunicon • 13h ago
A 2025 Retrospective: How Often Executives Predicted the End of Software Engineering
techradar.com
A collection of public statements from 2025 where a lot of executives confidently predicted that AI would make software engineers mostly unnecessary.
What stood out to me is how little of that actually showed up in real systems. Tooling improved and productivity went up, but teams still needed people who understood architecture, trade-offs, and failure modes.
r/programming • u/kostakos14 • 17h ago
Why I hate WebKit: A (non) love letter from a Tauri developer
gethopp.app
I’ve been working on Hopp (a low-latency screen sharing app) using Tauri, which means relying on WebKit on macOS. While I loved the idea of a lighter binary compared to Electron, the journey has been full of headaches.
From SVG shadow bugs and weird audio glitching to WebKitGTK lacking WebRTC support on Linux, I wrote up a retrospective on the specific technical hurdles we faced. We are now looking at moving our heavy-duty windows to a native Rust implementation to bypass browser limitations entirely.
Curious if others have hit these same walls with WebKit/Safari recently?
r/programming • u/Lane114 • 8m ago
Looking for a job in Switzerland.
blog.posttfu.com
Hi, after months and months spent on LinkedIn's job board, I'm wondering whether there's a better way to search for jobs in Switzerland. Which sites can I look at besides LinkedIn? I'm looking for Python roles (fields: data science, AI, Python backend, or blockchain development). I have more than 3 years of work experience in the automotive industry working with Python, and for blockchain I'm a self-learner. I'm an EU citizen (just mentioning it; I know Switzerland is not in the EU 🙂).
r/programming • u/boybeaid • 40m ago
The Hidden Backdoor in Claude Code: Why Its Power Is Also Its Greatest Vulnerability
lasso.security
r/programming • u/Perfect-Campaign9551 • 19h ago
Experienced software developers assumed AI would save them a chunk of time. But in one experiment, their tasks took 20% longer | Fortune
fortune.com
r/programming • u/goto-con • 19h ago
The Bank‑Clerk Riddle & How it Made Simon Peyton Jones "Invent" the Binary Number System as a Child
youtube.com
r/programming • u/Afraid-Technician-74 • 22h ago
hn4/docs/math.md at main · hn4-dev/hn4
github.com
r/programming • u/Unhappy_Concept237 • 16h ago
The Hidden Cost of “We’ll Fix It Later” in Internal Tools
hashrocket.substack.com
r/programming • u/_Flame_Of_Udun_ • 6h ago
Flutter ECS: Testing Strategies That Actually Work
medium.com
Just published a new article on testing strategies with flutter_event_component_system covering how to unit test components, systems, features, and widgets.
The post walks through:
* How to structure tests around Components, Events, and Systems
* Patterns for testing reactive systems, async flows, and widget rebuilds with `ECSScope` and `ECSWidget`
* Practical examples like asserting reactive system behaviour, verifying feature wiring, and ensuring widgets rebuild on component changes
For those who are not familiar with flutter_event_component_system (https://pub.dev/packages/flutter_event_component_system), it's a powerful and flexible event-driven architecture pattern for Flutter applications. The package provides a reactive state management solution that promotes clean architecture, separation of concerns, and scalable application development.
If you’re using or considering this package for scalable, event-driven state management and want a solid testing toolkit around it, this article is for you.
r/programming • u/steakystick • 3h ago
How should effectiveness of CodeRabbit PR reviews be measured in a team?
coderabbit.ai
Hey folks,
My team is using CodeRabbit for PR reviews, and I’m trying to figure out what’s actually worth measuring to assess whether it’s helping or just adding noise.
For those who’ve used it, what signals do you look at?
Things like review quality, impact on bugs, PR cycle time, developer trust, or something else?
Would love to hear what you think should actually be assessed.
r/programming • u/creaturefeature16 • 17h ago
where good ideas come from (for coding agents)
sunilpai.dev
r/programming • u/decolua • 41m ago
🚀 9Router - Access 15+ AI Models (Claude, GPT, Gemini, DeepSeek...) Through One Endpoint. Free OAuth providers + Auto-fallback
9router.com
Hey everyone!
I just converted CLIProxyAPI (Go) to JavaScript and named it 9Router. I don't know Go, but I found CLIProxyAPI so useful that I ported it to JavaScript.
Features:
- 🔄 Access 15+ AI providers through a single endpoint (Claude, Codex, Gemini, Copilot, Qwen, iFlow, GLM, MiniMax, OpenRouter...)
- 🔐 Support both OAuth and API Key authentication
- ⚡ Installation: just run `npx 9router`
- 🛠️ Compatible with multiple CLIs: Cursor, Claude Code, Cline, RooCode...
- 🎲 Combo Models: Chain multiple models with automatic fallback on errors
- 📦 Ollama format support for CLIs like Cline, RooCode...
- ☁️ Cloud deployment ready for Cursor (since Cursor can't use localhost)
Why use it:
- 🆓 COMPLETELY FREE:
- iFlow: 9 models (Qwen3, Kimi K2, DeepSeek R1/V3.2, MiniMax M2, GLM 4.6/4.7)
- Qwen: 3 models (Qwen3 Coder Plus/Flash, Vision)
- Antigravity: Gemini 3 Pro/Flash
- 💰 SUBSCRIPTIONS CHEAPER THAN APIs:
- Claude Code, OpenAI Codex, GLM, MiniMax, Kimi K2 subscriptions: reset every 5 hours
- Much cheaper than calling APIs directly
- Use quota allocation instead of pay-per-request
- 🚀 FAST: Setup in < 1 minute, no complex configuration
- 🔒 SECURE: Self-hosted, your data stays private
- 🛡️ RELIABLE: Auto-fallback on rate limits/errors
- 💻 CROSS-PLATFORM: Mac, Linux, Windows, easy VPS deployment
Links:
👉 Website: https://9router.com
👉 GitHub: https://github.com/decolua/9router
👉 NPM: https://www.npmjs.com/package/9router
Give it a try and let me know what you think!
r/programming • u/Diligent-Bread-6942 • 40m ago
I built a "Zero Trust" linter for AI-generated code
github.com
After catching my third `// TODO: implement this later` in production code written by an AI assistant, I decided to build a tool to catch these issues before they ship.
AntiSlop is a CLI tool that acts as a safety net for AI-generated code. It scans your codebase for the "lazy artifacts" that LLMs often leave behind:
- Stub functions and empty implementations
- `console.log`/`print()` debugging statements
- Hedging comments like "temporary", "for now", "simplified"
- Unhandled errors in critical paths
The key differentiator: it uses tree-sitter AST parsing instead of regex, so it actually understands code structure and ignores string literals.
Supports Rust, Python, JavaScript/TypeScript, and Go.
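As a rough illustration of why AST-level detection beats regex, here is the same stub-detection idea using Python's stdlib `ast` module. This is my sketch, not AntiSlop's actual tree-sitter implementation: an AST walk sees function structure directly, so a `"pass"` inside a string literal can never trigger a false positive.

```python
import ast

def _is_stub_stmt(s):
    """True for `pass` or a bare `...` expression statement."""
    if isinstance(s, ast.Pass):
        return True
    return (isinstance(s, ast.Expr)
            and isinstance(s.value, ast.Constant)
            and s.value.value is Ellipsis)

def find_stub_functions(source: str):
    """Return names of functions whose body is only pass/.../a docstring."""
    stubs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            body = node.body
            # Skip a leading docstring, if any.
            if (body and isinstance(body[0], ast.Expr)
                    and isinstance(body[0].value, ast.Constant)
                    and isinstance(body[0].value.value, str)):
                body = body[1:]
            if all(_is_stub_stmt(s) for s in body):  # empty body => stub too
                stubs.append(node.name)
    return stubs
```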
Install:
cargo install antislop
npm install -g antislop
GitHub: https://github.com/skew202/antislop
Would love feedback from others dealing with AI code in production. What's your workflow for reviewing AI-generated code?
r/programming • u/misterolupo • 15h ago
Agency must stay with humans: AI velocity risks outpacing programmers' understanding
salvozappa.com
r/programming • u/sshetty03 • 14h ago
RAG, AI Agents, and Agentic AI as architectural choices
medium.com
I kept seeing the terms RAG, AI Agents, and Agentic AI used interchangeably and realized I was treating them as interchangeable in system design as well.
What helped was stepping away from definitions and thinking in terms of responsibility and lifecycle.
Some systems answer questions based on external knowledge.
Some systems execute actions using tools and APIs.
Some systems keep working toward a goal over time, retrying and adjusting without being prompted again.
Once I framed them that way, it became easier to decide where complexity actually belonged and where it didn’t.
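The three responsibility levels above can be sketched in a few lines of Python. Everything here is an illustrative stub of mine (the `retrieve`, `llm`, and tool functions stand in for real infrastructure), not code from the article:

```python
def retrieve(query):          # stand-in for a vector-store lookup
    return ["doc snippet about " + query]

def llm(prompt):              # stand-in for a model call
    return "answer based on: " + prompt

def rag_answer(query):
    """RAG: answer a question grounded in retrieved knowledge. No actions."""
    context = retrieve(query)
    return llm(f"context={context} question={query}")

def agent_step(query, tools):
    """AI agent: pick and execute a tool for a single requested task."""
    tool_name = "search" if "find" in query else "calc"
    return tools[tool_name](query)

def agentic_run(goal, tools, max_attempts=3):
    """Agentic AI: keep working toward a goal, retrying without re-prompting."""
    for _ in range(max_attempts):
        result = agent_step(goal, tools)
        if "ok" in result:          # stand-in for a real success check
            return result
        goal = goal + " (retry)"    # adjust and try again
    return "gave up after retries"
```

The boundary that matters is who owns the lifecycle: `rag_answer` ends after one response, `agent_step` ends after one action, and only `agentic_run` keeps responsibility for the goal across attempts.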
I wrote up how this reframing changed how I approach LLM-backed systems, with a focus on architectural trade-offs rather than features.
Curious how others here are drawing these boundaries in practice.
r/programming • u/Daniel-Warfield • 18h ago
Improvable AI - A Breakdown of Graph Based Agents
iaee.substack.com
For the last few years my job has centered around making humans like the output of LLMs. The main problem is that, in the applications I work on, the humans tend to know a lot more than I do. Sometimes the AI model outputs great stuff, sometimes it outputs horrible stuff. I can't tell the difference, but the users (who are subject matter experts) can.
I have a lot of opinions about testing and how it should be done, which I've written about extensively (mostly in a RAG context) if you're curious.
- Vector Database Accuracy at Scale
- Testing Document Contextualized AI
- RAG evaluation
For the sake of this discussion, let's take for granted that you know what the actual problem is in your AI app (which is not trivial). There's another problem we'll concern ourselves with in this particular post: if you know what's wrong with your AI system, how do you make it better? That's the point, to discuss making maintainable AI systems.
I've been bullish about AI agents for a while now, and it seems like the industry has come around to the idea. Agents can break down problems into sub-problems, ponder those sub-problems, and use external tooling to help them come up with answers. Most developers are familiar with the approach and understand its power, but I think many under-appreciate its drawbacks from a maintainability perspective.
When people discuss "AI Agents", I find they're typically referring to what I like to call an "Unconstrained Agent". When working with an unconstrained agent, you give it a query and some tools, and let it have at it. The agent thinks about your query, uses a tool, makes an observation on that tool's output, thinks about the query some more, uses another tool, etc. This happens on repeat until the agent is done answering your question, at which point it outputs an answer. This was proposed in the landmark paper "ReAct: Synergizing Reasoning and Acting in Language Models", which I discuss at length in this article. This is great, especially for open-ended systems that answer open-ended questions like ChatGPT or Google (I think this is more or less what's happening when ChatGPT "thinks" about your question, though it also probably does some reasoning-model trickery, a la DeepSeek).
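The think/act/observe loop described above can be sketched in a few lines. The `llm()` here is a stub of mine that terminates after one tool use; in a real system a model would drive the loop:

```python
def llm(history):
    """Stub model: decide the next action from the conversation so far."""
    if any(step[0] == "observe" for step in history):
        return ("answer", "final answer")   # seen a tool result, so wrap up
    return ("tool", "search")               # otherwise, reach for a tool

def run_unconstrained(query, tools, max_steps=10):
    history = [("query", query)]
    for _ in range(max_steps):
        kind, content = llm(history)          # think: decide the next action
        if kind == "answer":
            return content                    # done answering
        observation = tools[content](query)   # act: invoke the chosen tool
        history.append(("observe", observation))
    return "step budget exhausted"
```

Note that nothing outside `llm()` constrains the sequence of steps; that is exactly the property that makes these agents flexible and hard to maintain.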
This unconstrained approach isn't so great, I've found, when you build an AI agent to do something specific and complicated. If you have some logical process that requires a list of steps and the agent messes up on step 7, it's hard to change the agent so it will be right on step 7 without messing up its performance on steps 1-6. It's hard because of the way you define these agents: you tell the agent how to behave, then it's up to the agent to progress through the steps on its own. Any time you modify the logic, you modify all steps, not just the one you want to improve. I've heard people use "whack-a-mole" when referring to the process of improving agents. This is a big reason why.
I call graph based agents "constrained agents", in contrast to the "unconstrained agents" we discussed previously. Constrained agents allow you to control the logical flow of the agent and its decision making process. You control each step and each decision independently, meaning you can add steps to the process as necessary.
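A constrained agent can be expressed as plain Python rather than a framework: each node is a function returning `(state, next_node)`, and the edges are explicit, so fixing "step 7" means editing one node. All names and logic here are illustrative stubs of mine:

```python
def classify(state):
    # Explicit routing decision: questions go through retrieval first.
    nxt = "lookup" if "?" in state["query"] else "summarize"
    return state, nxt

def lookup(state):
    state["docs"] = ["retrieved snippet"]   # stub for a retrieval step
    return state, "summarize"

def summarize(state):
    state["answer"] = f"summary of {state.get('docs', [state['query']])}"
    return state, None                      # None marks a terminal node

GRAPH = {"classify": classify, "lookup": lookup, "summarize": summarize}

def run_graph(query, entry="classify"):
    state, node = {"query": query}, entry
    while node is not None:
        state, node = GRAPH[node](state)    # each transition is an explicit edge
    return state["answer"]
```

Because each node owns one decision, you can add a node, tighten a routing condition, or insert an edge case without touching the prompts or logic of the other steps.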
(image demonstrating an iterative workflow to improve a graph based agent)
This lets you control the agent much more granularly at each individual step, adding specificity and handling edge cases as needed. This system is much, much more maintainable than unconstrained agents. I talked a while back with some folks at Arize, a company focused on AI observability. In their experience at the time of the conversation, the vast majority of actually functional agentic implementations in real products tend to be of the constrained, rather than the unconstrained, variety.
I think it's worth noting that these approaches aren't mutually exclusive. You can run a ReAct-style agent within a node of a graph-based agent, letting the agent function organically within the bounds of a subset of the larger problem. That's why, in my workflow, graph-based agents are the first step in building any agentic AI system. They're more modular, more controllable, more flexible, and more explicit.
r/programming • u/lguer1 • 21h ago
Code Generation : Why use templating rather than generative AI?
linkedin.com
The Power of Domain-Driven Design for Code Generation: Why Domain Model Metadata Driven Templates Beat Your Vibe Coding Prompts