The article is talking about new grads generally, but I think there's an issue with AI that isn't talked about enough. It's not that it's taking away jobs [1], it's that it is taking away skills.
Even if you are the biggest critic of AI, it's hard to deny that the frontier models are quite good at the sort of stuff that you learn in school. Write a binary tree in C? Check. Implement radix sort in Python? check. An A* implementation? check.
Once upon a time, I had to struggle through these. My code wouldn't run properly because I forgot to release a variable from memory or I was off-by-one on a recursive algorithm. But the struggling is what ultimately helped me actually learn the material [2]. If I could just type out "build a hash table in C" and then shuffle a few things around to make it look like my own, I'd have never really understood the underlying work.
At the same time, LLMs are often useful, but still fail quite frequently in real world work. I'm not trusting cursor to do a database migration in production unless I myself understand and check each line of code that it writes.
Now, as a hiring manager, what am I supposed to do with new grads?
[1] which I think it might be to some extent in some companies, by making existing engineers more productive, but that's a different point
[2] to the inevitable responses that say "well I actually learn things better now because the LLM explains it to me", that's great, but what's relevant here is that a large chunk of people learn by struggling
> Even if you are the biggest critic of AI, it's hard to deny that the frontier models are quite good at the sort of stuff that you learn in school. Write a binary tree in C? Check. Implement radix sort in Python? check. An A* implementation? check.
I don't feel this is a strong argument, since these are the sorts of things one has been able to easily look up on Stack Overflow, GitHub, and so on for a while now. What "AI" did was become a more convenient code search tool with some text-manipulation abilities.
But you still need to know the fundamentals, otherwise you won't even know what to ask. I recently used GPT to get a quick code sample for a linear programming solution, and it saved me time looking up the API for scipy... but I knew what to ask for in the first place. I doubt GPT would have suggested that as a solution if I had described the problem at too high a level.
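For anyone curious, the scipy API in question is roughly this shape. The numbers here are invented; the point is that you already have to know to reach for `linprog` and to phrase your problem as an objective plus inequality constraints:

```python
from scipy.optimize import linprog

# Toy problem: maximize x + y subject to x + y <= 4 and x <= 2 (x, y >= 0).
# linprog minimizes, so we negate the objective coefficients.
c = [-1, -1]
A_ub = [[1, 1],   # x + y <= 4
        [1, 0]]   # x     <= 2
b_ub = [4, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, -res.fun)  # optimal point and the maximized objective (4.0)
```

Describe the same problem as "how do I split a budget across two things," and nothing in that prompt points the model at LP at all.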
Don't forget there were lots of things that are in standard libraries now that didn't use to exist back when I was coding in C in the 1990's. Nobody really writes their own sorting algorithm anymore -- and nobody should write their own A* algorithm either.
Honestly though, I recently asked Claude 3.7 Sonnet to write a python script to upload ssh keys to a mikrotik router, to prompt for the username and password -- etc. And it did it. I wouldn't say I loved the code -- but it worked. Code was written in more of a golang format, but okay. It's fine and readable enough. Hiring a contractor from our usual sources would have taken a week at least, probably by the time you add up the back and forth with code reviews and bugs.
I think for a lot of entry level positions (particularly in devops automation say), AI can effectively replace them. You'd still have someone supervise their code and do code reviews, but now the human just supervises an AI. And that human + AI combo replaces 3 other humans.
Those are the sorts of things you're supposed to struggle with in school though.
If students are using AI now, that is indeed the same thing as looking up solutions on Stack Overflow or elsewhere. It's cheating for a grade at the expense of never developing your skills.
Stack Overflow could help answer specifically targeted questions about how things worked, or suggest areas of debugging. It couldn't/wouldn't take a direct question from an assignment and provide a fully working answer to it.
You still have to understand what's happening and why, I think.
I remember going to a cloud meetup in the early days of AWS. Somebody said "you won't need DBAs because the database is hosted in the cloud." Well, no. You need somebody with a thorough understanding of SQL in general and your specific database stack to successfully scale. They might not have the title "DBA," but you need that knowledge and experience to do things like design a schema, write performant queries, and review a query plan to figure out why something is slow.
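That kind of knowledge is easy to illustrate. Here's a toy sqlite3 session (table and index names invented) showing what reviewing a query plan actually looks like, and the difference once someone who understands indexing gets involved:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")

# Before indexing: the planner has to scan the whole table.
plan_before = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchone()[-1]
print(plan_before)  # e.g. "SCAN orders"

# A "DBA" (whatever the actual title) knows to add this.
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchone()[-1]
print(plan_after)  # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

The cloud hosts the database; it doesn't read the SCAN in your plan output for you.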
I'm starting to understand that you can use a LLM to both do things and teach you. I say that as somebody who definitely has learned by struggling, but realizes that struggling is not the most efficient way to learn.
If I want to keep up, I have to adapt, not just by learning how to use tools that are powered by LLMs, but by changing how I learn, how I work, and how I view my role.
I'm seeing something similar. LLMs have helped me tremendously, especially in tasks like translating from one language to another.
But I've also witnessed interns using them as a crutch. They can churn out code faster than I did at an equivalent stage in my career, but they really struggle with debugging. Often, it seems like they just throw up their hands and pivot to something else (language, task, model) instead of troubleshooting. It almost seems like they are being conditioned to think implementation should always be easy. I often wonder if this is just an "old curmudgeon's" attitude or if it reveals something more systemic about the craft.
Calculators have been available forever, but have not eliminated math education. Even algebra systems that can solve equations and do integrals and derivatives have been available forever, but people understand that if they don't learn how it actually works they are robbing themselves. By the same token, if you need to do this stuff professionally, you end up relying on computers to do it for you.
> Write a binary tree in C? Check. Implement radix sort in Python? check. An A* implementation? check.
You can look up any of these and find dozens of implementations to crib from if that's what you want.
Computers can now do more, but I'm not (yet) sure it's all that different.
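For reference, the crib-able A* in question fits in a comment. This is a minimal sketch of a 4-connected grid version with a Manhattan-distance heuristic; the grid encoding and names are my own:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2-D grid of 0 (free) / 1 (wall), 4-way moves,
    Manhattan-distance heuristic. Returns a shortest path or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            r, c = nxt
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)  # a 7-node route around the wall row
```

Carrying the path inside each heap entry keeps the sketch short; a real implementation would store parent pointers and reconstruct the path at the end.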
I agree with you, but just to steelman the other side: how do you know when you are robbing yourself and when you are just being pragmatic?
When I change the spark plugs in my car, am I robbing myself if I'm not learning the intricacies of electrode design, materials science, combustion efficiency, etc.? Or am I just trying to be practical enough to get a needed job done?
To the OPs point, I think you are robbing yourself if the "black box" approach doesn't allow you to get the job done. In other words, in the edge cases alluded to, you may need to understand what's going on under the hood to implement it appropriately.
> how do you know when you are robbing yourself and when you are just being pragmatic?
I don't know why we're pretending that individuals have suddenly lost all agency and self-perception? It's pretty clear when you understand something or don't, and it's always been a choice of whether you dive deeper or copy some existing thing that you don't understand.
We know that if we cheat on our math homework, or copy from our friend, or copy something online, that's going to bite us. LLMs make getting an answer easier, but we've always had this option.
I don’t know why you are ignoring the concept of opportunity cost to make an argument.
Did you drive to work today? Did you learn everything about the tensile strength of nylon seatbelts before you buckled up? How about tarmacadam design before you exited your driveway? Or PID control theory before you turned on the HVAC?
The point I’m making is that some people disagree about how much you need to know. Some people are ok with knowing just enough to get a job done because it makes them more productive overall. So the question still stands: How do you gauge when learning is enough? To my point above, I think it comes down to whether you can get the job done. Learning beyond that may be admirable in your mind, but not exactly what you’re being paid for, and I think some experts would consider it a poor use of time/resources.
Do you care if your subordinate wrote a good report using a dictionary to “cheat” instead of memorizing a good vocabulary? Or that they referenced an industry standard for an equation instead of knowing it by heart? I know I don’t.
I said it’s up to the individual to make their own choices. I don’t know who or what you’re arguing against, but I don’t think I have much to do with it. Peace
I was simply asking you to put a finer framework on how individuals should decide. Like I said multiple times, IMO it comes down to what is needed to get the job done, but I’m open to other thoughts. Saying it’s up to the individual isn’t really saying much, other than a veiled version of “I don’t know but I feel the compulsion to comment anyway.”
I think you have a good point, but I think the paradigm shift here is that people are chasing careers and money using LLM tools in a way that wasn't possible with calculators, and enforced differently within well-paying, white collar engineering professions.
For example, there's actual liability (legal and financial) involved in building a bridge that subsequently falls apart - not so with small bits of code block. Similarly there's a level of academic rigor involved in the certification process for structural/mechanical/etc. engineers that doesn't (and maybe can't?) exist within software engineering.
I'm not really sure what problem you're trying to point out here. There are legal standards and liability for engineering, and if someone violates them using an LLM they are held just as liable as they would be had they done the work themselves.
But the same is true for code? You are held to the same standards as if you had written it yourself, whatever that may be. Frequently that is nothing.
> [2] to the inevitable responses that say "well I actually learn things better now because the LLM explains it to me", that's great, but what's relevant here is that a large chunk of people learn by struggling
I'm using AI to explain things to me.
And I'm still struggling, I'm just struggling less.
It's the economy and outsourcing. Why is everyone hell-bent on saying AI is killing jobs? I think it's because it's a great scapegoat: blame a machine rather than foreigners and shithead management. It's crazy too, because not only are you out a job, you wind up getting shittier products and services at unreasonable prices. A double whammy!
Do you have some data to show that outsourcing is the culprit? It seems just as easy to blame "foreigners" as it is to blame "AI", especially considering your blanket followup statement about those foreigners always making "shittier" products.
I don't have data, but I can tell you that covid taught my company how to 'work remotely' and having learned that lesson, they seem to have pivoted away from a 30 / 70 mix of direct hires and onshore h1-b contractors, and have heavily utilized 'near shore' folks in LATAM.
I would not be surprised at all if other companies have quietly done the same while touting 'the future of AI', because as a society we seem to grudgingly accept automation far more readily than we accept offshoring.
It's not obvious to me. I look around and nobody I know is outsourcing anything anymore than they were 5, 10, 20 years ago. Nor is it obvious that outsourced products are inherently shittier than something made domestically.
You can't just make a blanket statement about the entire economy and say "it's obvious". We live in a big world. Your perception is not my perception. That's why data is so important.
Because from personal experience I've seen loads of companies wind down and remove internal teams like internal QA in favor of outsourcing to other regions, for example. It's made my job extremely annoying because I can't just tap the shoulder of my nearby QA engineer and see what's up, I have to wait an entire day for them to get the next build and then deal with the wrong stuff they've reported.
That's great that it's obvious to you! To some of us, not so much. I'd love to hear more about what it is, specifically, that your eyes are seeing, so that I may possibly shift my perspective.
Thanks for the feedback, I suppose I can see how it reads that way, but I assure you it wasn't intended as such. I simply want to politely ask them to expand on their perspective.
Hmm, thanks, but I don't think I can read your post any other way. Sorry, it's hard to interpret the tone as unintended, however, maybe other adjectives could be argued instead of the two I chose.
Regarding my other comment - I'm not sure what you mean. I wasn't trying to politely ask that commenter to expand on what they said. I was criticizing their attitude and also their uncharitable use of the "show me the data" ultimatum.
What I did in my first post was try to respect the fact that that may be their view of things and validate their perspective, then kindly explain that I don't see it, and would be open to hearing more about what they see. I could perhaps see how the "Thanks! :)" unintentionally adds a layer of sarcasm, but it's kind of a bummer if that's the case.
Regarding your other comment, I know, and that's exactly my point. You had a concern about the way that the other post framed the user's opinion. My comment attempted to do exactly what you're asking for - approach the user with a positive attitude and ask for more data without vilifying them (eg, "[making] people sound like assholes").
Please, I'm begging you not to do this. HN is the only platform on the internet I still engage with because it's one of the few places where people generally act like people (whether the emotions are positive or negative), as opposed to the broader internet, where the smirking tone of provocation is always the first priority.
I've already said I've just been trying to be polite in my conversations. There's no "smirking tone of provocation" intended, and the smilies are intended to be taken at face value - genuine smiles, not sarcastic or superior smirks.
I've told you a few times now that I'm attempting to be polite as I converse in good faith. I don't know how else to make that point again, nor why you continue to insist I'm being provocative. I wished you a nice day because I could feel that we were already at that impasse.
You can ask for data without trying to make people sound like assholes for having an opinion based on observation, which would be an unrealistic blanket restriction for humans talking to each other.
IDK, I kind of agree with Mao insofar as people should do a certain amount of research before spouting off on subjects they don't understand. Otherwise you've just got reams and reams of people waffling about things they actually do not know about.
"Unless you have investigated a problem, you will be deprived of the right to speak on it. Isn't that too harsh? Not in the least. When you have not probed into a problem, into the present facts and its past history, and know nothing of its essentials, whatever you say about it will undoubtedly be nonsense. Talking nonsense solves no problems, as everyone knows, so why is it unjust to deprive you of the right to speak?"
Yeah, not surprising from the "power flows from the barrel of a gun" guy. So what it boils down to is "unless you agree with me, you will be deprived of the right to speak".
And that's the problem. "Do your research before spouting off" is good in principle. But it gets weaponized via "if you don't agree with me, you obviously haven't done your research", and used to silence anyone who disagrees.
We have to put up with uninformed opinions, in order to leave space for dissenting views.
> We have to put up with uninformed opinions, in order to leave space for dissenting views.
I think your comment is great at conflating "allowing criticism against the government and speech on a sidewalk" with "disallowing unconsidered speech in certain private spaces because of the danger that some kinds of information has". You're very clumsily conflating two discrete things here.
First, you assume that people spouting complete nonsense without investigation into something can be productive. I fundamentally disagree with this, based on experience. I have encountered people with what essentially were delusions of grandeur — they had very limited and grossly incorrect physics knowledge, with no mathematical foundation, and they stated that they saw falsities in Einsteinian Relativity. They then proceeded to argue that, were Einstein alive, they would be entitled to a debate with him, as would presumably everyone else with misgivings from ignorance. They had absolutely no understanding of what Relativity was, no understanding even of what an electron is, and the entire discussion of trying to explain anything to them was essentially a complete waste of time, as they had neither the mathematical basis, nor the understanding of physical law and experimentation, to understand the vast amounts of evidence in support of Einstein's theory of spacetime and relativity. Every single sentence out of their mouth was either mildly incorrect waffling from ignorance, or complete and utter nonsense.
I, for one, do not think that this debate was productive, and they argued that my finishing on the note that they should, in fact, avail themselves of the knowledge of physics so that they could understand the tests that have been done for themselves was an "argument to authority". From the outset, there was absolutely no possibility of meaningful debate, because they had chosen to remain ignorant despite the internet being flooded with places where they could learn even a modicum of basic knowledge. This is what it means to "do research" and "learn". I argue, based on these experiences, that you do, actually, owe it to the people around you to learn about something before spouting inane garbage on the matter.
Secondly, I argue we can allow dissenting views without leaving room within society, for popular subjects of misconception, such as holocaust denial or the hundreds of right wing grifters trying to sell people on the idea that vaccination is evil, to be given platforms.
There are people who engage in debate, not truthfully, or honestly, to discuss ideas and learn, but instead to spread their misinformation, hoping to catch people in the crowd that they can profit off. I assert that this is one of the reasons why a myriad of right wing ideas are taking hold right now. Vaccine denialism, holocaust denial, transphobia, etc. are rife — because it comes down to scientific misinformation and profit. People who are already ignorant, finding that it is monetarily fruitful to spread this ignorance in the name either of ideological malfeasance (such as in the case of transphobia — one big example being the christofascist Heritage Foundation injecting millions of dollars into the UK) or just purely out of self-interest (Jordan Peterson being a notable name there).
And all of it is remedied through people who are misinformed of the scientific evidence "shutting the fuck up and learning". The root comes from people abusing platforms to spread their ignorance, and then people parroting that without research. I think that there is room for people to say whatever they want in a public space (in a shopping mall, on a sidewalk), but I do not believe that it is right, nor in the best interests of society, for these people to be given room by universities, by public theaters, or by online platforms. These people feed off waffling in front of an audience, and the only way that we can beat this epidemic of literal bullshit is by denying them that audience, starving them of social oxygen and saying "no, learn more before you deserve a space to speak publicly about this".
It’s interesting, and completely in character, that for all the noise the GOP makes about America being for Americans(tm), they don’t care at all about American companies firing their American employees to exploit cheaper labor from outside the country.
Don't they? As ill advised as these tariff wars may be, their stated purpose is to bring jobs back to the United States. Lately all the complaints of the form "Product X will cost $Y if made in the USA" have been coming from the other side of the aisle.
They're only trying to bring back blue collar jobs (and doing a bad job of that), rather than all jobs affected by outsourcing. It's purely political, as with everything.
They are waving their hands wildly while blaming other countries aggressively for all our problems. I see nothing from them criticizing American CEOs for their own aggressive outsourcing strategies.
Public companies always follow incentives, if the CEO didn't do that he would get replaced with the first one that says he would. This is how capitalism works, if you want to change how they behave you need to change laws or incentives, with stuff like tariffs.
You’re missing the point and at the same time are making my point for me. Politicians can change the regulations that mold incentives. See their awful strategy around tariffs. Neoliberalism makes outsourcing a more viable strategy.
> Why is everyone hell-bent on saying AI is killing jobs? I think it's because it's a great scapegoat: blame a machine rather than foreigners and shithead management.
Why is everyone hell-bent on blaming foreigners, rather than the management that actually makes these decisions, and the domestic elected officials who are actually responsible for and make the decisions that affect the economy?
Historically, I'm not aware of any examples of successful tariffs on services. Can it be done? Maybe. But what's to stop consulting shell companies from mixing in foreign labor with domestic labor?
> I mean it: Find a way to compete. You're good enough; you don't need government protection. You're brilliant, fast moving, hard working.
How do you compete when corporations have a silent policy that disqualifies you? Can an American citizen/grad apply for an H-1B? Corporations have gamed the system at the expense of Americans and graduates. They hire cheap slave labor that has now become too expensive even for them, hence the effort to open offshore dev sweatshops.
I feel like the more obvious explanation is that we live in a time of economic uncertainty (cough, tariffs), so companies are cutting back hiring until things become more certain.
The job market crash didn't correlate at all with WFH, it correlated with the end of WFH and (more importantly) exactly lined up with the end of ZIRP.
Money was free, so a lot of people were paid out of thin air. When money stopped being free salaries actually had to come from somewhere, and there weren't enough somewheres to go around.
This is the correct answer. Tariffs are not why the job market is so awful. Maybe that will be true in the future, but the past two years of horrible terrible miserable state of the job market is not because of tariffs imposed a month ago.
I'm sure I'm not the only one who remembers all those posts on hn 2-3 years ago about how bad the job market is, right? It has only become worse.
I know a kid who interned at a job last summer. Graduated, applied to a full-time job at the company. He happened to know someone in HR who told him "we got over a thousand applications for this job req in one day."
How tariffs can be blamed for that kind of situation, which is happening all over the US and has been for literal _years_, defies logic.
At least in Europe, I feel there has been a general slowdown for a while, and this was before the tariffs. If there is uncertainty, and maybe even a drop in revenue, companies really start considering whether hiring makes sense at the moment unless it is absolutely mandatory.
That would seem to at least have a major impact that needs to be accounted for. A couple more I'd add:
First, the Trump administration's economic impact goes well beyond tariffs - which are themselves highly significant - to unprecedented interference in the free market and private business; destruction of the regulatory and other institutions that a stable economy depends on (e.g., democracy, the courts, the rule of law, bank regulation); and disruption of the stability of international relations, possibly leading to highly destructive things like wars.
Also, the recent trend of business to switch from (broadly speaking) innovation and productivity to rent-seeking, epitomized by private equity: cut workforces, other investment, and product to the bone and squeeze as much as possible out of the asset - leaving a dead, dry carcass.
The biggest thing with Trump is that nobody knows what he is going to do next. Businesses, more than anything, need to be able to make long-term plans. Stability is important, and that is out the window.
I would rather have somebody with life experience than somebody with an education right now. It's just like how we complain that the AI leaderboards are not representative of real AI skill; it's the exact same for academic benchmarks of whatever institution minted you a diploma. I don't need an overfitted worker, just like I don't need an overfitted AI agent.
That's just one article, but there are plenty more. Do a basic search for "not hiring Gen Z" or something similar and you'll find tons of examples. It's easier for people to believe AI is to blame rather than take the answer straight from the hiring managers' mouths.
They don’t want to hire Gen Z because they see them as more hassle than they’re worth. I’m not saying whether that’s true or not, but that’s how a lot of managers and business owners feel right now, and you don’t have to look very hard to figure that out.
AI isn't replacing many jobs yet. But it is causing customers to hold off on certain purchases, like dev services, due to uncertainty. And the shills are amplifying the effect. I'm seeing layoffs caused by this.
Yeah, I can only think of three ways AI is causing unemployment:
1) Hype, as you said, leading to delayed hiring.
2) Offshore workers using AI becoming more competitive.
3) Reallocation of capital to AI projects, which skews demand towards data scientists at the expense of, say, front-end devs (who ironically might have provided better return).
None of these are actually reducing the number of human workers needed, but the social and economic impact is real.
Been chewing on versions of this debate for years - blame keeps shifting but not much really changes. Honestly, everyone points fingers but the fixes always seem out of reach. Kinda sucks, but that's how it goes.
Sure, we have seen fantastic gains in AI capability, but until we have more AI tech diffusion into products, industrial processes, etc., AI's effect on the economy may be much smaller than what people anticipate.
Every one of these posts wants to blame AI because that's the vogue explanation, but every time we see a shift like this there are better explanations.
The wave of tech layoffs a few years ago was blamed on AI but was so obviously attributable to interest rates and changing tax policies that the idea that the proto-AI tools we had at the time were responsible was laughable.
For this shift we at least have better AI tools to blame (though I'd argue they're still not good enough), but we also have a US President who has straight up said that he wants to cause a global recession and has been doing everything in his power to make that happen.
Given that the impact of someone running a bulldozer through the economy like this is well studied, and that we'd predict exactly what we're seeing here, attributing the damage to AI is silly verging on irresponsible. Place the blame where it belongs!
"Journalism" is now often just a euphemism for shock porn and clickbait.
At the end of this Atlantic article, the author admits:
> Luckily for humans, though, skepticism of the strong interpretation is warranted. For one thing, supercharged productivity growth, which an intelligence explosion would likely produce, is hard to find in the data. For another, a New York Fed survey of firms released last year found that AI was having a negligible effect on hiring.
In other words: did we scare ya? Good, because it got you to read this far. Nothing to actually see here.
There has been a longterm negative trend for recent grad employment.
Even those who do get employed, they tend to be underemployed with low wages.
The old excuse was 'automation' was killing jobs.
The lesser old excuse was offshoring.
Now it's AI?
How about we stop inventing excuses and perhaps look at the root cause of the 'recent grad' factor: perhaps requiring university degrees that aren't worth anything for jobs that don't need them is the problem?
> perhaps requiring university degrees that aren't worth anything for jobs that don't need them is the problem?
I don't know?
Kind of sounds like the problem is more fundamental than that. It sounds like the job is not actually there in the first place. Doesn't matter how qualified you are if there's no money to pay you.
It's tempting to connect "AI is kind of like having an always-awake intern" with "nobody is hiring interns" but I'm skeptical. I think this is more about the decline of the corporate enterprise model in general. Exponential growth can't continue forever. Management is trying to force the curve to be exponential by no longer hiring anyone who isn't clearly going to improve profits in the short term.
Corporate models seek to grow exponentially, with hit and miss results. Not sure where there are any signs of that stopping.
More widely:
Businesses, economies, and natural ecosystems are all full of both exponential drivers and limitations. Since the first life form.
It isn’t a model that is going away. Or that can go away. Unless there are no new opportunities for new things, which seems unlikely anytime soon, there will always be new pockets of exponential growth.
> the decline of the corporate enterprise model in general. Exponential growth can't continue forever.
People have been saying things like that probably since the creation of the corporate model. (Exponential is too much, but I'll take that as exaggeration to make the point.)
The article is talking about new grads generally, but I think there's an issue with AI that isn't talked about enough. It's not that it's taking away jobs [1], it's that it is taking away skills.
Even if you are the biggest critic of AI, it's hard to deny that the frontier models are quite good at the sort of stuff that you learn in school. Write a binary tree in C? Check. Implement radix sort in Python? check. An A* implementation? check.
Once upon a time, I had to struggle through these. My code wouldn't run properly because I forgot to release a variable from memory or I was off-by-one on a recursive algorithm. But the struggling is what ultimately helped me actually learn the material [2]. If I could just type out "build a hash table in C" and then shuffle a few things around to make it look like my own, I'd have never really understood the underlying work.
At the same time, LLMs are often useful, but still fail quite frequently in real world work. I'm not trusting cursor to do a database migration in production unless I myself understand and check each line of code that it writes.
Now, as a hiring manager, what am I supposed to do with new grads?
[1] which I think it might be to some extent in some companies, by making existing engineers more productive, but that's a different point
[2] to the inevitable responses that say "well I actually learn things better now because the LLM explains it to me", that's great, but what's relevant here is that a large chunk of people learn by struggling
> Even if you are the biggest critic of AI, it's hard to deny that the frontier models are quite good at the sort of stuff that you learn in school. Write a binary tree in C? Check. Implement radix sort in Python? check. An A* implementation? check.
I don't feel this is a strong argument, since these are the sort of things that one could easily lookup on stackoverflow, github, and so on for a while now. What "AI" did was being a more convenient code search tool + text manipulation abilities.
But you still need to know the fundamentals, otherwise you won't even know what to ask. I recently used GPT to get a quick code sample for a linear programming solution, and it saved me time looking up the API for scipy... but I knew what to ask for in the first place. I doubt GPT would suggest that as a solution if I described the problem at too high a level.
Don't forget, lots of things that are in standard libraries now didn't exist back when I was coding in C in the 1990s. Nobody really writes their own sorting algorithm anymore -- and nobody should write their own A* implementation either.
Honestly though, I recently asked Claude 3.7 Sonnet to write a Python script to upload SSH keys to a MikroTik router, prompt for the username and password -- etc. And it did it. I wouldn't say I loved the code -- but it worked. The code read more like Go than Python, but okay. It's fine and readable enough. Hiring a contractor from our usual sources would have taken at least a week, probably, by the time you add up the back and forth with code reviews and bugs.
I think for a lot of entry level positions (particularly in devops automation say), AI can effectively replace them. You'd still have someone supervise their code and do code reviews, but now the human just supervises an AI. And that human + AI combo replaces 3 other humans.
Those are the sorts of things you're supposed to struggle with in school though.
If students are using AI now, that is indeed the same thing as looking up solutions on Stack Overflow or elsewhere. It's cheating for a grade at the expense of never developing your skills.
Stack Overflow could help answer specifically targeted questions about how things worked, or suggest areas to debug. It couldn't/wouldn't take a direct question from an assignment and provide a fully working answer to it.
You still have to understand what's happening and why, I think.
I remember going to a cloud meetup in the early days of AWS. Somebody said "you won't need DBAs because the database is hosted in the cloud." Well, no. You need somebody with a thorough understanding of SQL in general and your specific database stack to successfully scale. They might not have the title "DBA," but you need that knowledge and experience to do things like design a schema, write performant queries, and review a query plan to figure out why something is slow.
I'm starting to understand that you can use an LLM both to do things and to teach you. I say that as somebody who has definitely learned by struggling, but who realizes that struggling is not the most efficient way to learn.
If I want to keep up, I have to adapt, not just by learning how to use tools that are powered by LLMs, but by changing how I learn, how I work, and how I view my role.
I'm seeing something similar. LLMs have helped me tremendously, especially in tasks like translating from one language to another.
But I've also witnessed interns using them as a crutch. They can churn out code faster than I did at an equivalent stage in my career, but they really struggle with debugging. Often, it seems like they just throw up their hands and pivot to something else (language, task, model) instead of troubleshooting. It almost seems like they are being conditioned to think implementation should always be easy. I often wonder if this is just an "old curmudgeon's" attitude or if it signals something more systemic about the craft.
Calculators have been available forever, but they have not eliminated math education. Even algebra systems that can solve equations and do integrals and derivatives have been available forever, but people understand that if they don't learn how it actually works they are robbing themselves. By the same token, if you do need to do this stuff professionally, you end up relying on computers to do it for you.
> Write a binary tree in C? Check. Implement radix sort in Python? check. An A* implementation? check.
You can look up any of these and find dozens of implementations to crib from if that's what you want.
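Indeed -- a textbook A* on a 4-connected grid fits in a few dozen lines, which is exactly why it's so easy to crib. A hedged Python sketch, assuming unit edge costs and a Manhattan heuristic (the names are illustrative):

```python
import heapq

def astar(start, goal, walls, width, height):
    """A* over a 4-connected grid with an admissible Manhattan heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, node)
    best_g = {start: 0}
    parent = {}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]                # walk parent links back to start
            while node in parent:
                node = parent[node]
                path.append(node)
            return path[::-1]
        if g > best_g.get(node, float("inf")):
            continue                     # stale heap entry; skip it
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                parent[nxt] = node
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None                          # goal unreachable
```

Whether you crib this from Stack Overflow or an LLM, the open-question is the same: did you learn why the heuristic has to be admissible, or just that the tests pass?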
Computers can now do more, but I'm not (yet) sure it's all that different.
I agree with you, but just to steelman the other side: how do you know when you are robbing yourself and when you are just being pragmatic?
When I change the spark plugs in my car, am I robbing myself if I'm not learning the intricacies of electrode design, materials science, combustion efficiency, etc.? Or am I just trying to be practical enough to get a needed job done?
To the OPs point, I think you are robbing yourself if the "black box" approach doesn't allow you to get the job done. In other words, in the edge cases alluded to, you may need to understand what's going on under the hood to implement it appropriately.
> how do you know when you are robbing yourself and when you are just being pragmatic?
I don't know why we're pretending that individuals have suddenly lost all agency and self-perception? It's pretty clear when you understand something or don't, and it's always been a choice of whether you dive deeper or copy some existing thing that you don't understand.
We know that if we cheat on our math homework, or copy from our friend, or copy something online, that's going to bite us. LLMs make getting an answer easier, but we've always had this option.
I don’t know why you are ignoring the concept of opportunity cost to make an argument.
Did you drive to work today? Did you learn everything about the tensile strength of nylon seatbelts before you buckled up? How about tarmacadam design before you exited your driveway? Or PID control theory before you turned on the HVAC?
The point I’m making is that some people disagree about how much you need to know. Some people are ok with knowing just enough to get a job done because it makes them more productive overall. So the question still stands: How do you gauge when learning is enough? To my point above, I think it comes down to whether you can get the job done. Learning beyond that may be admirable in your mind, but not exactly what you’re being paid for, and I think some experts would consider it a poor use of time/resources.
Do you care if your subordinate wrote a good report using a dictionary to “cheat” instead of memorizing a good vocabulary? Or that they referenced an industry standard for an equation instead of knowing it by heart? I know I don’t.
I said it’s up to the individual to make their own choices. I don’t know who or what you’re arguing against, but I don’t think I have much to do with it. Peace
I was simply asking you to put a finer framework on how individuals should decide. Like I said multiple times, IMO it comes down to what is needed to get the job done, but I’m open to other thoughts. Saying it’s up to the individual isn’t really saying much, other than a veiled version of “I don’t know but I feel the compulsion to comment anyway.”
> I was simply asking you to put a finer framework on how individuals should decide
I'm not in the business of prescribing philosophies on how others should live their lives?
But surely that doesn’t preclude you from describing how you live your own?
I think you have a good point, but I think the paradigm shift here is that people are chasing careers and money using LLM tools in a way that wasn't possible with calculators, and enforced differently within well-paying, white collar engineering professions.
For example, there's actual liability (legal and financial) involved in building a bridge that subsequently falls apart - not so with small bits of code block. Similarly there's a level of academic rigor involved in the certification process for structural/mechanical/etc. engineers that doesn't (and maybe can't?) exist within software engineering.
I'm not really sure what problem you're trying to point out here. There are legal standards and liability for engineering, and if someone violates them using an LLM they are held just as liable as they would be had they done the work themselves.
But the same is true for code? You are held to the same standards as if you had written it yourself, whatever that may be. Frequently that is nothing.
What change do you want to see here?
There are still schools where you can learn to shoe a horse...
[flagged]
>> [2] to the inevitable responses that say "well I actually learn things better now because the LLM explains it to me", that's great, but what's relevant here is that a large chunk of people learn by struggling <<
I'm using AI to explain things to me.
And I'm still struggling, I'm just struggling less.
That's progress I think.
It's the economy and outsourcing. Why is everyone hell bent on saying AI is killing jobs? I think it's because it's a great scapegoat: blame a machine rather than foreigners and shithead management. It's crazy too, because not only are you out a job, you wind up getting shittier products and services at unreasonable prices -- a double whammy!
Do you have some data to show that outsourcing is the culprit? It seems just as easy to blame "foreigners" as it is to blame "AI", especially considering your blanket followup statement about those foreigners always making "shittier" products.
Here's some backup to that claim [1], though offshoring is only part of it.
In reality, it's likely several factors:
- Offshoring/outsourcing
- Businesses running out of ZIRP cash/returns
- Software replacing a lot of "paper" jobs (AI being a sliver of this)
- Older people needing and not vacating jobs like past generations
- Higher CoL expenses meaning lower-paying jobs that would/could be occupied by a recent graduate aren't.
- General market sentiment being cautious/conservative due to the whiplash of the last 17 years.
As with most things, it's not one convenient scapegoat, it's many problems compounding into one big clusterf*ck.
[1] https://archive.ph/8Zda3
Add to that section 174 of the tax code. This directly impacts software companies and profitability.
I don't have data, but I can tell you that covid taught my company how to 'work remotely' and having learned that lesson, they seem to have pivoted away from a 30 / 70 mix of direct hires and onshore h1-b contractors, and have heavily utilized 'near shore' folks in LATAM.
I would not be surprised at all if other companies have quietly done the same while touting 'the future of AI', because as a society we seem to grudgingly accept automation far more readily than we accept offshoring.
It’s obvious, you just have to look around. Please don’t tell us not to believe our eyes because we don’t have a double blind study.
It's not obvious to me. I look around and nobody I know is outsourcing anything anymore than they were 5, 10, 20 years ago. Nor is it obvious that outsourced products are inherently shittier than something made domestically.
You can't just make a blanket statement about the entire economy and say "it's obvious". We live in a big world. Your perception is not my perception. That's why data is so important.
Most mid sized companies I’ve worked with are nearshoring almost everything or in the process of doing so
It doesn’t bite me as much due to seniority but it’s still happening
Tbh if I was younger I’d just try to relocate myself seems fun
What exactly is nearshoring? Canada/Mexico? SLatAM?
What is "mid-sized" ?
There are so many vague descriptors in your post that it's almost entirely meaningless.
Do you work in the tech industry? In what part?
Because from personal experience I've seen loads of companies wind down and remove internal teams like internal QA in favor of outsourcing to other regions, for example. It's made my job extremely annoying because I can't just tap the shoulder of my nearby QA engineer and see what's up, I have to wait an entire day for them to get the next build and then deal with the wrong stuff they've reported.
That's great that it's obvious to you! To some of us, not so much. I'd love to hear more about what it is, specifically, that your eyes are seeing, so that I may possibly shift my perspective.
Thanks! :)
[flagged]
Thanks for the feedback, I suppose I can see how it reads that way, but I assure you it wasn't intended as such. I simply want to politely ask them to expand on their perspective.
Kinda like what you just asked others to do.[1]
[1]https://news.ycombinator.com/item?id=43859330
Hmm, thanks, but I don't think I can read your post any other way. Sorry, it's hard to interpret the tone as unintended, however, maybe other adjectives could be argued instead of the two I chose.
Regarding my other comment - I'm not sure what you mean. I wasn't trying to politely ask that commenter to expand on what they said. I was criticizing their attitude and also their uncharitable use of the "show me the data" ultimatum.
What I did in my first post was try to respect the fact that that may be their view of things and validate their perspective, then kindly explain that I don't see it, and would be open to hearing more about what they see. I could perhaps see how the "Thanks! :)" unintentionally adds a layer of sarcasm, but it's kind of a bummer if that's the case.
Regarding your other comment, I know, and that's exactly my point. You had a concern about the way that the other post framed the user's opinion. My comment attempted to do exactly what you're asking for - approach the user with a positive attitude and ask for more data without vilifying them (eg, "[making] people sound like assholes").
Ah, well. Enjoy your day. :)
Please, I'm begging you not to do this. HN is the only platform on the internet I still engage with because it's one of the few places where people generally act like people (whether the emotions are positive or negative), as opposed to the broader internet, where the smirking tone of provocation is always the first priority.
I've already said I've just been trying to be polite in my conversations. There's no "smirking tone of provocation" intended, and the smilies are intended to be taken at face value - genuine smiles, not sarcastic or superior smirks.
I've told you a few times now that I'm attempting to be polite as I converse in good faith. I don't know how else to make that point again, nor why you continue to insist I'm being provocative. I wished you a nice day because I could feel that we were already at that impasse.
You can ask for data without trying to make people sound like assholes for having an opinion based on observation, which would be an unrealistic blanket restriction for humans talking to each other.
Did my comment make him out to be an asshole? That wasn't the intent.
I don't see any indication an observation drove the opinion.
IDK, I kind of agree with Mao insofar as people should do a certain amount of research before spouting off on subjects they don't understand. Otherwise you've just got reams and reams of people waffling about things they actually do not know about.
"Unless you have investigated a problem, you will be deprived of the right to speak on it. Isn't that too harsh? Not in the least. When you have not probed into a problem, into the present facts and its past history, and know nothing of its essentials, whatever you say about it will undoubtedly be nonsense. Talking nonsense solves no problems, as everyone knows, so why is it unjust to deprive you of the right to speak?"
https://www.marxists.org/reference/archive/mao/selected-work...
The only caveat here is that Mao didn't follow his own advice, lol
Yeah, not surprising from the "power flows from the barrel of a gun" guy. So what it boils down to is "unless you agree with me, you will be deprived of the right to speak".
And that's the problem. "Do your research before spouting off" is good in principle. But it gets weaponized via "if you don't agree with me, you obviously haven't done your research", and used to silence anyone who disagrees.
We have to put up with uninformed opinions, in order to leave space for dissenting views.
> We have to put up with uninformed opinions, in order to leave space for dissenting views.
Your comment conflates "allowing criticism against the government and speech on a sidewalk" with "disallowing unconsidered speech in certain private spaces because of the danger that some kinds of information carry". You're conflating two discrete things here.
First, you assume that people spouting complete nonsense without investigation into something can be productive. I fundamentally disagree with this, based on experience. I have encountered people with what essentially were delusions of grandeur — they had very limited and grossly incorrect physics knowledge, with no mathematical foundation, and they stated that they saw falsities in Einsteinian Relativity. They then proceeded to argue that, were Einstein alive, they would be entitled to a debate with him, as would presumably everyone else with misgivings from ignorance. They had absolutely no understanding of what Relativity was, no understanding even of what an electron is, and the entire discussion of trying to explain anything to them was essentially a complete waste of time, as they had neither the mathematical basis, nor the understanding of physical law and experimentation, to understand the vast amounts of evidence in support of Einstein's theory of spacetime and relativity. Every single sentence out of their mouth was either mildly incorrect waffling from ignorance, or complete and utter nonsense.
I, for one, do not think that this debate was productive, and they argued that my finishing on the note that they should, in fact, avail themselves of the knowledge of physics so that they can understand the tests that have been done for themselves, was an "argument to authority". From the outset, there was absolutely no possibility of meaningful debate, because they had chosen to remain ignorant despite the internet being flooded with places where they could learn even a modicum of basic knowledge. This is what it means to "do research" and "learn". I argue based on these experiences that you do, actually, owe it to the people around you to learn about something before spouting inane garbage on the matter.
Secondly, I argue we can allow dissenting views without leaving room within society, for popular subjects of misconception, such as holocaust denial or the hundreds of right wing grifters trying to sell people on the idea that vaccination is evil, to be given platforms.
There are people who engage in debate, not truthfully, or honestly, to discuss ideas and learn, but instead to spread their misinformation, hoping to catch people in the crowd that they can profit off. I assert that this is one of the reasons why a myriad of right wing ideas are taking hold right now. Vaccine denialism, holocaust denial, transphobia, etc. are rife — because it comes down to scientific misinformation and profit. People who are already ignorant, finding that it is monetarily fruitful to spread this ignorance in the name either of ideological malfeasance (such as in the case of transphobia — one big example being the christofascist Heritage Foundation injecting millions of dollars into the UK) or just purely out of self-interest (Jordan Peterson being a notable name there).
And all of it is remedied through people who are misinformed of the scientific evidence "shutting the fuck up and learning". The root comes from people abusing platforms to spread their ignorance, and then people parroting that without research. I think that there is room for people to say whatever they want in a public space (in a shopping mall, on a sidewalk), but I do not believe that it is right nor in the best interests of society for these people to be given room by universities, by public theaters, or by online platforms. These people feed off waffling in front of an audience, and the only way we can beat this epidemic of literal bullshit is by denying them that audience, starving them of social oxygen and saying "no, learn more before you deserve a space to speak publicly about this".
But in the end, it's unrealistic. It's taking a true principle and overapplying it as a form of rhetoric.
It’s interesting, and completely in character, that for all the noise the GOP makes about America being for Americans(tm), they don’t care at all about American companies firing their American employees to exploit cheaper labor from outside the country.
Don't they? As ill advised as these tariff wars may be, their stated purpose is to bring jobs back to the United States. Lately all the complaints of the form "Product X will cost $Y if made in the USA" have been coming from the other side of the aisle.
They're only trying to bring back blue collar jobs (and doing a bad job of that), rather than all jobs affected by outsourcing. It's purely political, as with everything.
They are waving their hands wildly while blaming other countries aggressively for all our problems. I see nothing from them criticizing American CEOs for their own aggressive outsourcing strategies.
Public companies always follow incentives; if the CEO didn't do that he would get replaced with the first one who says he would. This is how capitalism works: if you want to change how they behave you need to change laws or incentives, with stuff like tariffs.
You’re missing the point and at the same time are making my point for me. Politicians can change the regulations that mold incentives. See their awful strategy around tariffs. Neoliberalism makes outsourcing a more viable strategy.
> Why is everyone hell bent to say AI is killing jobs? I think its a because its a great scapegoat to blame a machine rather than foreigners and shithead management.
Why is everyone hell bent on blaming foreigners, rather than the management that actually makes these decisions, and the domestic elected officials who are actually responsible for, and make, the decisions that affect the economy?
The parent blames management.
"foreigners and shithead management."
Isn’t the new wave of outsourcing a result of high interest rates?
[flagged]
Historically, I'm not aware of any examples of successful tariffs on services. Can it be done? Maybe. But what's to stop consulting shell companies from mixing in foreign labor with domestic labor?
[flagged]
>I mean it: Find a way to compete. You're good enough; you don't need government protection. You're brilliant, fast moving, hard working.
How do you compete when corporations have a silent policy that disqualifies you? Can an American citizen/grad apply for an H-1B? Corporations have gamed the system at the expense of Americans and graduates. They hire cheap slave labor that has now become too expensive even for them, hence the effort to open offshore dev sweatshops.
> slave labor
lol
The labor market is a sellers market and has been for years. Plenty of people work in IT. Find something you are better at; make yourself better.
I feel like the more obvious explanation is that we live in a time of economic uncertainty (cough, tariffs), so companies are cutting back hiring until things become more certain.
Job market got issues way before tariffs, somewhere in the pandemic times.
I believe it's WFH. It taught companies remote work, and it's a small next step to offshore work.
The job market crash didn't correlate at all with WFH, it correlated with the end of WFH and (more importantly) exactly lined up with the end of ZIRP.
Money was free, so a lot of people were paid out of thin air. When money stopped being free salaries actually had to come from somewhere, and there weren't enough somewheres to go around.
This is the correct answer. Tariffs are not why the job market is so awful. Maybe that will be true in the future, but the past two years of horrible terrible miserable state of the job market is not because of tariffs imposed a month ago.
I'm sure I'm not the only one who remembers all those posts on hn 2-3 years ago about how bad the job market is, right? It has only become worse.
I know a kid who interned at a job last summer. Graduated, applied to a full-time job at the company. He happened to know someone in HR who told him "we got over a thousand applications for this job req in one day."
How tariffs can be blamed for that kind of situation, which is happening all over the US and has been for literal _years_, defies logic.
Offshore is nothing new. Has been tried with multiple degrees of failure for decades.
Hell, my first job decades ago was as cheap labor in an IT project offshored from the US.
Maybe -- also to all the comments below -- it's "overdetermined", i.e. an "all of the above" situation, with AI some part of that mix.
At least in Europe I feel there has been general slowdown for a while and this was before tariffs. If there is uncertainty and maybe even drop in revenue companies really start considering if hiring makes sense in the moment unless it is absolutely mandatory.
Was happening last year too for SWE.
That would seem to at least have a major impact that needs to be accounted for. A couple more I'd add:
First, the Trump administration's economic impact is much more than tariffs - which are highly significant - but unprecedented interference in the free market and private business; destruction of the regulatory and other institutions that a stable economy depends on (e.g., democracy, the courts, the rule of law, bank regulation); disrupting the stability of international relations, possibly leading to highly destructive things like wars.
Also, the recent trend of business to switch from (broadly speaking) innovation and productivity to rent-seeking, epitomized by private equity: cut workforces, other investment, and product to the bone and squeeze as much as possible out of the asset - leaving a dead, dry carcass.
The biggest thing with Trump is that nobody knows what he is going to do next. Businesses more than anything need to be able to make long-term plans. Stability is important, and that is out the window.
I would rather have somebody with life experience than somebody with an education right now. It's just like how we complain that the AI leaderboards are not representative of real AI skill; it's the exact same for academic benchmarks of whatever institution minted you a diploma. I don't need an overfitted worker, just like I don't need an overfitted AI agent.
I don’t think it’s any of the reasons listed in the article or the ones mentioned here either.
I’ll be blunt, people don’t want to hire Gen Z because of bad past experiences with that generation.
When 1 in 6 companies are "hesitant to hire Gen Z workers," then yeah, obviously unemployment is going to be higher for them. https://finance.yahoo.com/news/1-6-us-companies-reluctant-10...
That’s just one article, but there are plenty more. Do a basic search for "not hiring Gen Z" or something similar and you’ll find tons of examples. It’s easier for people to believe AI is to blame rather than take the answer straight from the hiring managers’ mouths.
They don’t want to hire Gen Z because they see them as more hassle than they’re worth. I’m not saying whether that’s true or not, but that’s how a lot of managers and business owners feel right now, and you don’t have to look very hard to figure that out.
AI isn't replacing many jobs yet. But it is causing customers to hold off on certain purchases, like dev services, due to uncertainty. And the shills are amplifying the effect. I'm seeing layoffs caused by this.
Yeah, I can only think of three ways AI is causing unemployment:
1) Hype, as you said, leading to delayed hiring.
2) Offshore workers using AI becoming more competitive.
3) Reallocation of capital to AI projects, which skews demand towards data scientists at the expense of, say, front-end devs (who ironically might have provided better return).
None of these are actually reducing the number of human workers needed, but the social and economic impact is real.
Been chewing on versions of this debate for years - blame keeps shifting but not much really changes. Honestly, everyone points fingers but the fixes always seem out of reach. Kinda sucks, but that's how it goes.
Sure, we have seen fantastic gains in AI capability, but until we have more AI tech diffusion into products, industrial processes, etc., AI’s effect on the economy may be much smaller than what people anticipate.
Isn't the simplest explanation that the economy is slowing and new college grads generally tend to have trouble getting hired during recessions?
> it's that it is taking away skills.
And for some it is giving away skills.
a give and take tool on a need to know basis
"Even newly minted M.B.A.s from elite programs are struggling to find work."
I was worried at first, but this is an elite journalism product for elites who are facing economic insecurity, not AI.
http://archive.today/AUizz
Hasn't outsourcing been happening for decades?
[dead]
Every one of these posts wants to blame AI because that's the vogue explanation, but every time we see a shift like this there are better explanations.
The wave of tech layoffs a few years ago was blamed on AI but was so obviously attributable to interest rates and changing tax policies that the idea that the proto-AI tools of the time were responsible was laughable.
This time we at least have better AI tools to blame (though I'd argue they're still not good enough), but we also have a US President who has straight up said that he wants to cause a global recession and has been doing everything in his power to make that happen.
Given that the impact of someone running a bulldozer through the economy like this is well studied, and that we'd predict exactly what we're seeing here, attributing the damage to AI is silly verging on irresponsible. Place the blame where it belongs!
"Journalism" is now often just a euphemism for shock porn and clickbait.
At the end of this Atlantic article, the author admits:
> Luckily for humans, though, skepticism of the strong interpretation is warranted. For one thing, supercharged productivity growth, which an intelligence explosion would likely produce, is hard to find in the data. For another, a New York Fed survey of firms released last year found that AI was having a negligible effect on hiring.
In other words: did we scare ya? Good, because it got you to read this far. Nothing to actually see here.
AI are lying machines, and also enable the lying that is convenient to executive management?
AI is only killing jobs because people are killing jobs and blaming it on AI.
There has been a long-term negative trend in recent-grad employment.
Even those who do get employed tend to be underemployed with low wages.
The old excuse was 'automation' was killing jobs.
The lesser old excuse was offshoring.
Now it's AI?
How about we stop inventing excuses and perhaps look at the root cause of the 'recent grad' factor. Perhaps requiring university degrees that aren't worth anything, for jobs that don't need them, is the problem?
> perhaps requiring university degrees that aren't worth anything for jobs that don't need them is the problem?
I don't know?
Kind of sounds like the problem is more fundamental than that. It sounds like the job is not actually there in the first place. Doesn't matter how qualified you are if there's no money to pay you.
For me as a fresh graduate in Poland, the job market doesn't even exist for juniors rn. And I live in a big city. Even internships are but a dream.
It's tempting to connect "AI is kind of like having an always-awake intern" with "nobody is hiring interns" but I'm skeptical. I think this is more about the decline of the corporate enterprise model in general. Exponential growth can't continue forever. Management is trying to force the curve to be exponential by no longer hiring anyone who isn't clearly going to improve profits in the short term.
Corporate models seek to grow exponentially, with hit-and-miss results. I'm not sure I see any signs of that stopping.
More widely:
Businesses, economies, and natural ecosystems, are all full of both exponential drivers and limitations. Since the first life form.
It isn’t a model that is going away. Or that can go away. Unless there are no new opportunities for new things, which seems unlikely anytime soon, there will always be new pockets of exponential growth.
> the decline of the corporate enterprise model in general. Exponential growth can't continue forever.
People have been saying things like that probably since the creation of the corporate model. (Exponential is too strong, but I'll take that as exaggeration to make the point.)