fardinahsan a day ago

Isn't the sample super biased? StackOverflow is increasingly bleeding users to AI tooling. Shouldn't we expect the remaining users to be increasingly distrustful of AI?

I don't use StackOverflow at all anymore.

  • mutkach 20 hours ago

    > Respondents were recruited primarily through channels owned by Stack Overflow. The top sources of respondents were onsite messaging, blog posts, email/newsletter subscribers, banner ads, and social media posts. Since respondents were recruited in this way, highly-engaged users on Stack Overflow were more likely to notice the prompts to take the survey over the duration of the collection promotion. We also recruited respondents via a Reddit ad campaign, this accounted for < 2% of total responses.

  • absoluteunit1 21 hours ago

    That’s a good point - but I suspect many people who don’t use StackOverflow still take the survey. It’s quite popular.

  • add-sub-mul-div 19 hours ago

    They sell their data to OpenAI, so they're also profiting from AI. And in the near future developers will become more fully dependent on StackOverflow for whatever they can't get from AI, because self sufficiency will have atrophied.

rkozik1989 18 hours ago

One of the biggest problems I've run into with AI tooling is that it brute-forces a solution instead of weighing the things engineers subconsciously consider while crafting one. So even when it does manage to solve the problem you asked it to solve, the result is almost never the best solution out there.

The reality of problems like this is that you don't discover them until the product has been in users' hands for a while. So while LLMs may be useful in speeding up the resolution of these problems, they're fairly useless at discovering them. Yet despite this, CEOs of companies with major AI investments are promising an idealized future within a couple of years, as though it hasn't already been nearly 3 years since ChatGPT launched. Despite all the hype and promises, the pace of the product development lifecycle seems to remain unchanged. I think realistically it'll be at least 10 years before LLM-based tools and agents can do what CEOs are promising.

rustyrustling 16 hours ago

After 2-3 weeks of going hard at Claude Max - yeah, it's got major limits. At first I was trying to build fully prod-ready software entirely with vibe coding. Then I progressed to reviewing all the code it made. However, it will spend at least a third of its tokens and my constantly (5 hr) replenishing Opus allotment spinning in a circle - checking random dirs, random files, and unrelated things for the answer to a problem it THINKS is the solution (and that's not even considering the price increase they'll be doing in August, via token reduction and a 7-day waiting period, lol).

Then I just stopped letting it run rampant and reviewed every step it took (this is when I got the best results). So realistically, in a complex enterprise software stack (my primary experience is with Guidewire), it would be good at quickly scaffolding new parts of a microservice, adding onto very specific pieces of one, and removing the grunt work of manually hitting tab, cmd ->, ctrl ->, opt -> through big files - letting me just read the code and confirm rather than getting carpal tunnel.

Honestly the best use I've gotten out of it has been updating and adding onto my emacs config primarily used for org-roam.

As far as replacing engineers? After at least 300 hours using Max - I can say no, it will not. I realized this after I spent more time configuring the rules and prompting the AI than it would have taken to just write the code myself.

yakattak 21 hours ago

As expected we’re getting closer and closer to the Trough of Disillusionment. That’s not a bad thing, because it leads to the Plateau of Productivity.[1]

Anecdotally, I'm finding that LLMs are great when I can give them a hyper-specific target, like a single function to write. This isn't because they can't write an entire script - they can. It's because the more I let them run wild, the more it feels like my understanding of the code gets exponentially worse.

1: https://en.m.wikipedia.org/wiki/Gartner_hype_cycle

  • mutkach 20 hours ago

    Growing disillusionment among programmers (whose productivity gains, by the way, represent the most successful use case for AI yet) is indeed not necessarily a bad thing.

    What is concerning is that VCs seem to believe we are still in the exponential growth phase of the hype cycle. I believe the consensus among them (and the bigtech-adjacent shills) is that they are targeting a trillion-dollar market at minimum. Somehow.

    • jononor 10 hours ago

      Why is that concerning? A VC's job is to sell to a bigger fool. They are going to pump that narrative for years so they can make some exits, regardless of the underlying realism.

  • Nullabillity 18 hours ago

    Where's the NFT plateau of productivity? Asbestos? Lead?

    The Gartner model doesn't actually have anything to say about technology, it's just astrology for rich people.

hollownobody 21 hours ago

"AI tools are bad", says company most suffering by the use of AI tools