Someone tell my boss this, they don’t understand agile. They think we can “start the process” of developing a solution before we’ve understood a single thing about what the customer needs.
And it’s not just that we don’t have 100% of the requirements. It’s basically: we don’t talk to the customer or perform market research to know where we should take the product, so I’m going to make up features at an absurdly abstract level, and no, you don’t need to meet to talk about it, just start working. “The requirements will come later”, they say. From whom, exactly? 🤔
I feel this pain. On my last assignment we went through the whole process of grooming and story pointing, but the documentation was voluminous and we were all expected to be 100% conversant in each story. A story would take hours to groom, and the testers weren’t exactly sure how to test it, so the developers would basically have to write the end-to-end test cases along with detailed explanations of what could and couldn’t be tested (arbitrary XML format, so some things you had to change independently to test, while other things would never be changed independently in prod, so they weren’t valid to test). Also, sometimes the correct behavior was determined by the database schema (is it nullable?), which could lead to vastly different behaviors.
And lastly, we had to commit to delivering these features on time and to let everyone know ASAP if a timeline was under threat. Well, sometimes I’d tell them the timeline was under threat before the sprint had even started. That pissed them off something fierce, because it made us look incompetent to the customer - well… if the truth hurts, fix something in leadership. We were also expected to do “whatever it takes” to deliver stories on time. “Whatever it takes” means skipping grooming stories and skipping the documentation for the next feature, because I have to get this feature out the door.
I’ve never worked in the game development industry, but stories like this are what has kept me out of it. “Crunch time” that lasts months? Fuck that. I’m fifty years old with 25 years’ experience. I have kids at home. I work my ass off for about 9 hours a day and every once in a while a little more, but I’m not about that fucking lifestyle over shitty business software to make a shitty business more shitty money.
So needless to say I’m between jobs at the moment… I’m so over terrible leadership.
I can sympathize with a lot of the struggles you’ve endured. I deal with a lot of the same things and I’ve started to look elsewhere because for a company of 15, this place is run like a train wreck.
Just got off my stand-up meeting too; we had to cancel today’s grooming session because the project manager is on vacation. How will we survive without them repeating buzzwords and nodding like they understand what we’re talking about??? No shared responsibility on this team. It’s back to the waterfall method, and my boss likes throwing rocks off the cliff. Every man for themself!
okay, I’ve finished your software, it meets your zero requirements!
“Add support for another vendor”
“Okay I understood what you wanted perfectly without any context, I’ve also compiled, pushed, and upgraded your customers”
Some people think it will really be that easy. Maybe 50 years from now, but not with these generative models lmao.
Do you work with me? My boss refuses to do market research, refuses to check our website usage, refuses to change the features that users actually care about, because he doesn’t see value in it.
And good lord, I have to create a dashboard to show our analytics, but he won’t say exactly what’s needed. I’ve spoken to him multiple times, told him what I needed from him, and he still doesn’t know.
I might as well! Our website hasn’t been updated since 2016, and we don’t know which customers already have a solution like the one we sell. They’re running on decade-old market research, and they’ve pretty much abandoned the idea of landing large customers since our sales team (which is also my bosses; we don’t have a dedicated sales position) doesn’t perform. Instead my bosses are chasing smaller customers that provide smaller profit margins?? And this is all on the assumption that those customers will be interested in our product when federal subsidies start being handed out to our industry’s customers.
In short, we’re fucked.
Analytics? Customer empathy gathering? Market research? Why bother? They just saw a post on LinkedIn about a Blockchain ChatGPT AI Machine Learning NFT. You really need to keep your eye on the ball on how we can work together to shoehorn any of that into the product so I can seem smart posting about it on LinkedIn too. I’m never gonna hit CEO by gaining market insight. Gotta fleece everyone with the fanciest-sounding thing I have no clue about today. Don’t worry, I’ll have forgotten by the time you have it released, and then complain about the new maintenance burden. We need more features!!!
You joke, but one of my bosses did request we implement ChatGPT on our platform. It’s a recruitment platform, and he wanted ChatGPT to write the job descriptions…
And it’s not like it has zero application. But vaguely gesturing at a trendy technology and saying “we sure gotta leverage this, guys” is not a feature. Why are you doing this? What is the downside? Should we be doing this? “Let’s do a ChatGPT” is not a strategy.
“We shall make solutions to problems people don’t even know they have!”
“What people?”
“All people! Including us!”
THIS
Exactly. We’re not inventing an iPhone or anything. Extremely niche software that is losing market space every month while the PM’s sit around and prioritize the wrong things, there’s no strategy.
Ah, yes. The problem commonly known as PM-who’s-also-the-BPC-but-also-doesn’t-understand-what-either-role-actually-does, or “PMWATBPCBADUWERAD” for short.
I’ll do you one better. The hardest part of making crap people like is the damn people. I have been a product manager for a decade and I can confidently say that if I delivered exactly what the customer asks for, I would be an utter failure. Requirements and software that fulfill what a customer says they want will ultimately lead to them asking for something they previously didn’t realize they needed, because it turns out they have no idea what they want, have an agenda, or the conditions have shifted out from under you and what they said no longer holds water.
I could go on a tirade about this, but my two cents is you gotta listen to what everyone says, while remembering they’re human at the end of the day. It’s too damn easy for me to suck up dev time with what people want. Hell, just one word can keep a dev team busy for a long time. Internationalization! Boo!
I also need to build an environment where the dev team doesn’t despise the business due to a history of constantly shifting goalposts, borderline abusive metrics, and expectations that just create a battered development team. For some reason hiring a PM aligns with an org hitting the point where the original dev team has lost critical members because of terrible burnout and a culture of blaming people and not process. Takes a lot of therapeutic communication to remedy that.
TLDR; People. People are the reason all things are difficult.
I’m almost 40 and very slowly educating myself toward a CS degree outside of work. I feel like I’m so far behind you guys that my only way into the tech industry with decent compensation (>100k) to match my current position will be through my management history, soft skills, and general understanding of people. My current position is very much a diplomat between the people getting the work done and the people who want it done (then helping to get it done). Your post is very relatable even though I’m in a different industry. It gives me a little hope that some of my skills are transferrable even without a paper on the wall.
As you age, soft skills become way more important IMO. It’s almost impossible to keep up with the changing technology landscape, and while you could theoretically become an expert in some tech that never goes away (hello Cobol), eventually it will become obsolete and you’re left with no marketable skills.
And while some people are lifelong learners (I am), learning new programming languages over and over again gets old at some point. So transitioning into more of a people role (like management) is a good move when you get older.
And if AI keeps getting better at coding, some programming jobs could be in danger of automation, so it’s also a safety net for that scenario.
it’s the fucking humans
But coding seems quite literally like modeling those requirements.
Yes. But you have to know the requirements before you can pour them into code. You don’t start coding and get granted a vision by the god of logic about where your journey will lead you.
you don’t start coding and are granted a vision by the god of logic about where your journey will lead you.
What is Agile for $200?
From the article:
There are times when I’m writing software just for myself where I don’t realize some of the difficulties and challenges until I actually start writing code.
I get what you’re saying, but regardless of whether you have them upfront or discover them along the way, coding is modeling those requirements as we best imagine or understand them… even accidentally: when following practices learned from others, we may not even realize what requirements our modeling has solved.
Sure, sometimes you find requirements that you didn’t think of beforehand.
But what is programming at the core? I’d summarize it like this: “Explaining how to solve a complex problem to a very fast idiot.” And the thing C-suites like to forget is that this explanation is given in a specialised language that, at least ideally, leaves no room for interpretation. Because ultimately the computer doesn’t understand Python, Rust, C or even assembly. It understands opcodes given in binary. Assembly may be the closest human-readable approximation, but it still has to be translated for the computer to understand it.
So what happens when you “replace” programmers with neural networks? You replace a well-defined, precise language to use for your explanation (because you still have to explain to the fast idiot what you want it to do) with English or whatever natural language you train your network on. A language littered with ambiguities and vague syntax.
Were it a tool to drive nails into wood, you would’ve replaced a nail gun with a particularly lumpy rock. I don’t see neural networks effectively replacing programmers any time soon.
Hmm. I agree with everything you’ve said, but disagree regarding the utility of AI.
Everything we’ve done since the patch cord days has been to create tools that make it easier to reason about our code. We’ve done little or nothing to address the problem of reasoning about requirements and specifications. The closest we’ve come is a kind of iterative development, testing, and user validation process.
I think that ChatGPT and its siblings and descendants are likely not the answer, but I think that it must be possible to create tools to help us reason about requirements and specifications before we start coding. Given the difficulty of processing natural language, I think that whatever those tools are will either be AI systems or draw heavily on AI concepts.
Or maybe not. Maybe it really does take a trained and creative human acting only in concert with others to implement desires.
I agree. A neural network that you can basically treat like a fellow programmer that is always free to help you would be amazing. Rubber duck debugging with an intelligent Ducky. But for it to be useful it has to be able to understand domain knowledge as well as understand and question the explanations you give it. I think that would at least be extremely close to general intelligence. And that I don’t see happening any time soon.
Yes, you might be right that the kind of system I’m thinking of is too close to general intelligence to be anything we’ll see anytime soon.
I still hold out hope that we can somehow make progress on the problem of gathering requirements that can be turned into specifications that make sense. As far as I can tell, the number of people who can do that job effectively is becoming an ever smaller fraction of what’s actually required to keep up with the demand to create new systems.
You’re clearly not yet a Sr. Developer. Everyone knows that you start getting apocalyptic software visions the day after promotion.
Source: It was revealed to me in a dream
Those tend to come from the product owner, though.
Look at me! I am the product owner now!
“Did I say ‘we want it to do this OR that’? I meant we wanted it to do this AND that!” 🤦♂️
It’s coming with names
I think he’s missed a potential benefit of AI.
He seems to be speaking mostly of greenfield development, the creation of something that has never been done before. My experience was always in the field of “computerizing” existing manual processes.
I agree with him regarding the difficulty of gathering requirements and creating specifications that can be turned into code. My experience working as a solo programmer for tiny businesses (max 20 employees) was that very few people can actually articulate what they want and most of those that can don’t actually know what they want. The tiny number of people left miss all the hacks that are already baked into their existing processes to deal with gaps, inconsistencies, and mutually contradictory rules. This must be even worse in greenfield development.
That is not saying anything negative. If it were any other way, then they would have had success hiring their nephew to do the work. :)
Where I think AI could be useful during that phase of work is in helping detect those gaps, inconsistencies, and contradictory rules. This would clearly not be the AI that spits out a database schema or a bit of Python code, but it would nonetheless be AI.
We have AI systems that are quite good at summarizing the written word and other AI systems that are quite good at logical analysis of properly structured statements. It strikes me that it should be possible to turn the customers’ system descriptions into something that can be checked for gaps, inconsistencies, and contradictions. Working iteratively, alone at the start, then with expert assistance, to develop something that can be passed on to the development team.
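A toy sketch of what that kind of check could look like (every rule name and flag here is invented for illustration; a real tool would need far richer logic and an actual solver rather than brute force): reduce a few requirement statements to boolean constraints over a system state, then search for any state that satisfies all of them. An empty result means the statements contradict each other.

```python
from itertools import product

# Hypothetical requirements, encoded as boolean constraints over a
# two-flag system state. All names are illustrative only.
rules = {
    "R1: showing a discount requires a logged-in user":
        lambda s: (not s["discount"]) or s["logged_in"],
    "R2: the promo banner always shows a discount":
        lambda s: s["discount"],
    "R3: this flow must work for anonymous (guest) sessions":
        lambda s: not s["logged_in"],
}

flags = ["logged_in", "discount"]

# Enumerate every possible system state and keep those satisfying all rules.
states = [dict(zip(flags, bits)) for bits in product([False, True], repeat=len(flags))]
consistent = [s for s in states if all(rule(s) for rule in rules.values())]

print("satisfiable" if consistent else "contradictory")  # prints "contradictory"
```

The point isn’t the two-line solver; it’s that once customer statements are in a structured form, contradictions like R2 vs. R1+R3 fall out mechanically instead of surfacing months later in code review.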
The earlier the flaws can be discovered and the more frequently that the customer is doing the discovery, the easier those flaws are to address. The most successful and most enjoyable of all my projects were those where I was being hired explicitly to help root out all those flaws in the semi-computerized system they had already constructed (often enough by a nephew!).
I’m not talking about waterfall development, where everything is written in stone before coding starts. Sticking with water flow metaphors, I’m talking about a design and development flow that has fewer eddies, fewer sets of dangerous rapids, and less backtracking to find a different channel.
I feel like AI would fall down even harder here. A lot of long-running applications have “secret” rules in them that developers hold as tribal knowledge, or have to read the code to discover. Will AI be sophisticated enough to read a massive repo, probably dependent on several others, and have a realistic understanding of the requirements inherent in that code system? Because that’s what we pay senior devs to be good at quickly figuring out. I find myself skeptical that AI will be able to do that in a trustworthy way, given how it “hallucinates” now and doesn’t have the concept that it sometimes just doesn’t know. If a developer has to spend time checking the AI’s assertions about the rules, is that actually going to be faster than just keeping them in their head or doing the research themselves?
I agree with most of what you said, but I think I was not clear in my presentation of the domain of operations. I was not speaking of rewriting an existing system, but of gathering requirements for a system that is intended to replace existing manual systems, or to create systems for brand-new tasks.
That is, there is no existing code to work with, or at least nothing that is fit for purpose. Thus, you are starting at the beginning, where people have no choice but to describe something they would like to have.
Your reference to hallucination leads me to think that you are limiting your concept of AI to the generative large language models. There are other AI systems that operate on different principles. I was not suggesting that a G-LLM was the right tool for the job, only that AI could be brought to bear in analyzing requirements and specifications.
I wasn’t talking about rewriting an existing system either. I’m talking about adding to a system. In order to do that effectively, you need to understand the system as it stands and consider how any requirement could clash or be impossible with the current set of requirements. This is why I bring up the AI needing to pull a set of requirements from the existing code. You cannot add requirements without knowing the requirements that already exist.
I think that hallucination is still a massive issue. I don’t even like to call it hallucination, because what it really is is bad guesses. We should never forget that all any AI does is guess. It doesn’t reason about anything or connect information together. AI will hold contradictory positions because of this.
Currently we have no way to make an AI declare that it just doesn’t know or even very often ask for more information in order to make a decision because the method of training an AI is literally guess and check.
For that reason, I don’t think that AI will ever be the tool for the job when it comes to any kind of requirements gathering. I mean, I guess you could use it, but you’d always run the risk of being like that lawyer who ended up with made-up cases in his filing. The AI made things up because all it does is make its best guess, and it doesn’t care if that guess is grounded in much of anything at all.
Ah, I understand now. Yes, I think that maybe I agree with you in general.
I still think that AI operated by ethical experts has much to offer when used not as an automated replacement, but as a tool that can save time and help verify accuracy. I’m thinking in terms of a kind of teamwork where one member of the team is an AI system or assistant.
You’re right; the best part about AI is automating the annoying part, actually implementing what you want to code. Now you have more time to think about requirements, and you’ve sped up the process enough to maybe get several iterations in to really refine a product. However, I think ChatGPT is gonna stay a helper-function writer for the next few years.
I think that ChatGPT is probably the wrong tool for what I’m imagining. I’m thinking more in terms of “hypothesis generators” and “theorem testers” that, as far as I know, are not using the methods of ChatGPT in their operation. I think that those kinds of tools and others like them could be used to help clarify requirements before coding even starts.
I think AI isn’t going to replace the upper level of programmers, but I do believe the absolute number of programmers will drop as AI completes more and more of the labor involved in coding.
A lot of entry level jobs just won’t exist anymore, because the AI will do the typing work while a small number of people manage the AI.
And this will apply to pretty much all white collar work - at least that’s my prediction.
I believe that besides blue collar jobs, AI will practically eradicate the middle class, and sadly there won’t be a UBI to pick up the slack. But maybe I’m just too damn cynical.
I think that’s assuming a relatively consistent level of scope, but the scope will just get bigger. How big is your feature backlog? Will you be able to easily get through it even with the help of AI? How big will your feature backlog get if implementation friction is lowered?
deleted by creator
But those people don’t need to be programmers.
The reality is that most software is complex but trivial. It’s a bunch of requirements, but there’s no magic behind it. An AI that can turn a written text containing the requirements into a decently running program will replace tons of developers.
And since a future AI that’s actually trained to do software won’t have problems juggling 300 requirements at once (like humans do), it’s relatively easy to trust the result.
it’s relatively easy to trust the result.
… just as easy as taking the responsibility for it if it fails?
Do human programmers not fail?
I don’t want to hype AI, but you’re basically comparing a high school graduate AI (lots of general knowledge, no specialization) with a perfect senior dev. But that’s not really fair.
As soon as an AI works better than the average developer in a given area, it will replace them. Simple as that.
Of course it will make errors, but the question is, are the extra errors compared to a human worth the savings?
Just a quick example: let’s say you’d need 10 devs of 100k a year and they produce errors worth 200k a year. That means costs of 1.2million a years.
If an AI costs 100k in licenses, replaces 5 devs, and only adds, say, 200k in errors, you’re still at only 1 million a year.
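Writing that back-of-envelope comparison out explicitly (all figures are the hypotheticals from the comment above, not real data; treating the AI scenario’s error cost as the original 200k plus 200k of AI-added errors is one plausible reading of the arithmetic):

```python
# Hypothetical annual costs from the comment above (not real data).
dev_salary = 100_000

# Status quo: 10 devs, plus the cost of the errors they produce.
human_only = 10 * dev_salary + 200_000

# AI scenario: license fee, 5 remaining devs, the same human error
# cost, plus an extra 200k of errors introduced by the AI.
with_ai = 100_000 + 5 * dev_salary + 200_000 + 200_000

print(human_only)  # 1200000
print(with_ai)     # 1000000
```

On those made-up numbers the AI scenario comes out 200k cheaper per year even while producing twice the error cost, which is exactly the commenter’s point: the question isn’t whether the AI makes mistakes, but whether the extra mistakes cost less than the savings.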
Honestly, all the claimed use cases of generative AI for coding are much more easily fulfilled with normal tools. You can’t perform mass refactorings with them, because you’d need to manually check their output or prove that the code they generate is correct, and they can’t generate code that well unless your domain is well documented online, which isn’t the case for most companies.
There are places where generative AI will replace workers, especially in art, which makes it all the more important to ensure that whoever has their work used in training data is fairly compensated for the value their work generates for the AI company. For programming, however, I personally don’t see a ton of value in what exists today, at least.
Removed by mod