Lots of layoffs (“re-evaluating our operational footprint”) and switching to “agentic” processes. Target user is AI.
Anyone still hosting Gitlab?
“Software will be built by machines, directed by people.”
Oh my lord. Is this a delayed April Fools post?
This is dangerous for me to say on lemmy, but fuck it.
Doesn’t it make sense that machines would write for machines? Isn’t that basically what we already do with the compilation layers programmers rely on? We obviously wouldn’t write the 1s and 0s by hand, and most people don’t write assembly. Aren’t those just translation layers that let us write code at all?
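To make the “translation layer” point concrete, here’s a minimal Python sketch (my own illustration, not from the post) showing that the “human-readable” code we write is already machine-translated for us:

```python
import dis

def add(a, b):
    # The "human" layer: plain, readable source code.
    return a + b

# The translation layer: CPython compiles the function to bytecode,
# a machine-oriented representation we almost never write by hand.
dis.dis(add)
# Prints opcodes like LOAD_FAST and BINARY_ADD (BINARY_OP on Python 3.11+),
# which the interpreter, itself machine code, then executes.
```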
Right now we have LLMs writing in languages designed for humans, and they’re already doing some pretty wild stuff. If we get to the point where AI is literally a coding model (and not a generic LLM) that can use an AI-optimized way of writing code, who knows what it would be capable of.
Code is one of the few things AI is especially suited for. AI is just a big fancy prediction machine, so what better application than something that is by definition formulaic and pattern-based like code? I’m not saying we’re there now, just that the idea of machines writing software does make sense once it becomes actually feasible.
If we could have programmed like this from the beginning, we would have. There have been many evolutions toward making it easier to code. What’s easier than plain language?
I may be considering AI usage from a different angle. I’m less interested in the technical side than I am in the moral side.
AI companies trained their agents using open source software, did not contribute back to the code, did not credit the authors, and now want to sell it back to the same people they ripped off.
As an open source project maintainer, I’m disgusted by this.
I’m also a musician. AI companies trained agents on other people’s music without giving anything back to them. This also is disgusting.
AI trained on people’s work now lets you circumvent paying the original creators.
Add to this the resource usage and environmental impact.
This is why I see AI usage as immoral. It hurts real people.
@Bazoogle @1hitsong First of all - when it comes to creating programs, you want the output to be deterministic. Stochastic program output is a serious problem, as you _will_ get unreproducible bugs. Second, plain language is _not_ easy except for the simplest of tasks. Actual programs need to handle all kinds of corner cases and hardware weirdness and human weirdness. Your “plain language” goes from “do a thing” very quickly to “do a thing. but not that thing. or that other thing. and and and…”
Your options would be to write all those things in plain language, or to program them all with (hopefully) no mistakes, bugs, or vulnerabilities. Either way you have to catch all the situations. Even in plain language, not everybody will be able to effectively use AI to generate code. You need a solid understanding of software architecture to get useful output.
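To make the deterministic-vs-stochastic distinction concrete, here’s a toy Python sketch (my own illustration with made-up numbers, not a real model) of why sampled generation varies run to run while greedy decoding does not:

```python
import random

# Toy "model": a fixed distribution over candidate next tokens.
# (Hypothetical probabilities, purely for illustration.)
NEXT_TOKEN_PROBS = {"foo(": 0.5, "bar(": 0.3, "baz(": 0.2}

def greedy_decode():
    # Deterministic: always returns the highest-probability token.
    return max(NEXT_TOKEN_PROBS, key=NEXT_TOKEN_PROBS.get)

def sample_decode():
    # Stochastic: repeated runs can return different tokens, which is
    # how the same prompt can yield different code each time.
    tokens, weights = zip(*NEXT_TOKEN_PROBS.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(greedy_decode())                      # always "foo("
print([sample_decode() for _ in range(5)])  # varies between runs
```

Greedy decoding (temperature 0) makes the generation reproducible, but that’s a property of how you run the model, not of the program it emits.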
AI is capable of writing deterministic programs.
I would also like to preemptively emphasize that AI is not there yet. I am simply talking about the concept of machines creating software. If you step back from your anti-AI gut reaction and truly think about it, it would make sense once we get there technologically.
I don’t know why anyone should take your post seriously when you say that AI isn’t there yet. You’re saying, purely hypothetically, that AI could do these things, if it existed, which it doesn’t. That can’t be argued against because no matter what anyone points to, you can just say that isn’t it.
But, like, your basic premise that machines would be the best programmers of machines is inherently flawed because humans created those machines, and thus it should stand that humans would be the best programmers of those machines. But that’s a reductive argument that’s kinda more tell than show.
Programming is really just some layer of abstraction on modifying how a computer works, so vibecoding should really be just another layer to that abstraction. But as it stands now (and given how we have specifically built our current LLMs), their outputs are not deterministic, and thus they sort of fail as a means to program with. That’s one of dozens of reasons why it fails as a programming substitute.