I feel like Wayne is taking the Agile maxim “requirements always change” too literally. Agile doesn’t mean “every requirement always changes forever”.
In most live production environments today, requirements do keep changing — security, compliance, customer behavior, scaling — even when teams think they're done.
Agile isn’t making an empirical prediction ("all requirements will mutate endlessly"); it’s a philosophical posture toward uncertainty.
Wayne misses this interpretative nuance.
cjfd 1 hour ago [-]
It seems the article is much wiser than this comment. It is a distinction without a difference whether 'requirements always change' or 'every requirement always changes forever'. If you don't know which requirement is going to change next week, it does not matter which of these two is true.
firesteelrain 36 minutes ago [-]
That’s fair from a practical design perspective. But if you treat “requirements always change” as a hard truth rather than a heuristic or philosophical posture, you can end up in a mode where stability is viewed with suspicion and architecture never settles. Some requirements do stabilize, and knowing when that happens can help determine tradeoffs.
fredo2025 12 hours ago [-]
I agree with Wayne that the needs of the user don’t seem to end, even when your project or contract completes. The need becomes maintaining it, putting a twist on it, radically changing it, or abandoning it for something else.
I don’t agree on testing. It’s been a long time since I bought into that, and even tests written to gain confidence about uncertain behavior are a form of tech debt: the developer who follows you must decide whether to maintain each test or delete it, and its value doesn’t usually last. An exception would be verifying expected behavior of a library or service that must stay consistent, but that is not the job of most developers.
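To be concrete about that exception: a small pinning test is about all I’d keep around. A minimal sketch in Python (the payload is invented for illustration):

    import json

    def test_json_roundtrip_contract():
        # Pin the one library behavior we depend on: values survive
        # a dump/load round trip. If an upgrade breaks this, we find
        # out before production does.
        payload = {"café": [1, 2.5, None], "nested": {"ok": True}}
        assert json.loads(json.dumps(payload)) == payload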
tbrownaw 9 hours ago [-]
> Agile isn’t making an empirical prediction ("all requirements will mutate endlessly"); it’s a philosophical posture toward uncertainty
At some point this philosophy has to result in something concrete.
How much ongoing effort should be put into handling the possibility that this particular requirement might change?
Swizec 8 hours ago [-]
> How much ongoing effort should be put into handling the possibility that this particular requirement might change?
How likely is it that the world freezes and stops changing around your software? This includes business processes, dependencies, end-user expectations, regulations, etc.
In general that’s the difference between a product and a project. Even Coca-Cola keeps tweaking its recipe based on ingredient availability, changes in manufacturing, price optimizations, logistics developments, etc.
Hell, COBOL and FORTRAN still get regular updates every few years, because the software written in them stays under active maintenance and has evolving needs.
rightbyte 6 hours ago [-]
> Even Coca-Cola keeps tweaking its recipe
Ye, and they should stop. Have there been any big changes except "New Coke", which never reached my home town?
virgilp 4 hours ago [-]
As a general rule, one should presume that they know less about running a business than the people who actually do that. There are exceptions, but as with all rules, exceptions don't invalidate the rule.
"they should stop" is a fine rant to express your personal taste preferences, but objectively speaking, I would bet on Coca-Cola having good reasons when tweaking the recipes. If that happens, it's probably more necessary than a layman realizes.
arkh 4 hours ago [-]
The result of Agile (and DDD, TDD, etc.) comes back to The Mythical Man-Month: you're gonna throw one away. So plan the system for change.
And due to Conway's law: plan the organization for change.
From those ideas you derive Agile (make an organization easily changeable) and the tactical part of DDD (all the code architecture meant to be refactored often and easily).
rTX5CMRXIfFG 9 hours ago [-]
You have to be able to distinguish between general and specific theories, so that you don’t expect the general to provide you the specific.
AndrewKemendo 10 hours ago [-]
Preface: a formally verified end-to-end application with associated state machine(s) is kind of my engineering holy grail - so I’m a likely mark for this article.
However, the author never actually makes a good case for FV other than to satisfy hard-core OCD engineers like ourselves. Maybe the author feels like we all know their opinion - but it seems like they are arguing against a poster of Claude Shannon.
If the system is - for all intents and purposes - deterministically solving the subset of problems for the customer, and you never build the state machine, then who cares?
My argument is “there isn’t one” — that’s provided we’re in a business context where new features are ALWAYS more beneficial to the business inputs than formal verification.
If a business requirement requires formal verification then the argument is also moot - because it is part of the business requirement - and so it’s not optional, it’s a feature.
Come to think of it, I’m not really sure I’ve ever seen software created on behalf of a business that has formal verification, unless FV was a mandatory requirement of the application or it was a research project.
The last time I saw formal state machines built against a formally verified system, it was from a bored 50-year-old unicorn engineer doing it on a simple C application.
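For anyone who hasn’t seen the exercise, “building the state machine” can start as small as enumerating the states and brute-forcing an invariant. A toy sketch in Python (the turnstile machine is invented; real FV would use a model checker or proof assistant):

    from collections import deque

    # Invented example: a turnstile with two states and two events.
    TRANSITIONS = {
        ("locked", "coin"): "unlocked",
        ("locked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
        ("unlocked", "push"): "locked",
    }
    EVENTS = ["coin", "push"]

    def check_total():
        # Explore every reachable state and verify the invariant
        # "every event is handled in every state" -- a poor man's
        # model check for a finite machine.
        seen, frontier = set(), deque(["locked"])
        while frontier:
            state = frontier.popleft()
            if state in seen:
                continue
            seen.add(state)
            for event in EVENTS:
                assert (state, event) in TRANSITIONS, f"unhandled: {state}/{event}"
                frontier.append(TRANSITIONS[(state, event)])
        return seen

    print(check_total())  # {'locked', 'unlocked'}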
ipaddr 10 hours ago [-]
This may seem counterintuitive, but new features often alienate customers. It's not because of formal verification; it's because a percentage of customers don't want change.
shoo 10 hours ago [-]
> we’re in a business context where new features are ALWAYS more beneficial to the business inputs than formal verification.
Another way of framing this is "what is the impact (to the business / to the customers / to the users) of shipping a defect?". In a lot of contexts the impact of shipping defects is relatively low -- say SaaS applications providing a non-critical service, where defects, once noticed, can usually be fixed by rolling back to the last good version server side. In some contexts the impact of shipping defects is very high, say if the defect gets baked into hardware and ships before it is detected, and fixing it would require a recall that would bankrupt the company, or if a defect could kill the customers/users or crash the space probe or so on.
AndrewKemendo 1 hour ago [-]
Right, but then it’s a requirement. The author frames the argument around FV when it’s NOT a written requirement.
I mentioned exactly that:
> If a business requirement requires formal verification then the argument is also moot - because it is part of the business requirement - and so it’s not optional, it’s a feature.
xlii 8 hours ago [-]
> In some contexts the impact of shipping defects is very high (…)
I agree, however I think that many overestimate how frequent those environments are. Almost everything can be updated (and that includes dumb appliances, with hardware chips replaced by a technician), and the only real question is what your reliability vector is.
At the far end of the spectrum there’s the Two Generals Problem, bit flips in space, and mind-blowing complexity. I’ve seen with my own eyes industry-wide screwups that were fixed with a month of phone calls and exchanged paper slips, so it’s not like we (as humans) cannot live with unreliable systems.
I’ve been researching formal verification for a while, and IMO it is not fit for general use due to a lack of ergonomics. I might have some ideas on how to solve that, but I’d rather try to put them in a commercial box <insert dr evil meme>
hdjrudni 6 hours ago [-]
Things are often fixable, but if you keep breaking things for your users, you're going to develop a reputation for being unstable and your customers will leave.
xlii 3 hours ago [-]
That's true, but "Real Metrics" matter.
Bugs are on a spectrum - some might increase resource usage, some might crash for a percentage of users. Some might always manifest for a specific cohort of users, and it might not be profitable to fix the bug for them. It's an ocean of possibilities between "perfect system" and "complete failure".
Sanity tests are orders of magnitude easier than FV, and they can assure at least that.
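By sanity test I mean something like a cheap randomized sweep. A minimal sketch in Python (the discount function is hypothetical):

    import random

    def apply_discount(price_cents: int, percent: int) -> int:
        # Hypothetical function under test.
        return price_cents - (price_cents * percent) // 100

    def test_discount_sanity():
        # Not a proof, just 10,000 cheap probes: the result must
        # never go negative or exceed the original price.
        rng = random.Random(0)
        for _ in range(10_000):
            price = rng.randrange(0, 1_000_000)
            pct = rng.randrange(0, 101)
            assert 0 <= apply_discount(price, pct) <= price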
LegionMammal978 33 minutes ago [-]
And even a "perfect system" might not be perfect for all users. I've seen lots of "old-school, battle-tested, rock-solid software" with bizarre behavior that supporters insist is an intended feature and can never be changed or configured, on account of it being convenient for some workflow back in the '80s or whatever. No system is so "perfect" that it can be all things to all people, unless it's truly trivial in scope.