ashishb 6 hours ago

Here's my `npm` command these days. It reduces the attack surface drastically.

  alias npm='docker run --rm -it -v ${PWD}:${PWD} --net=host --workdir=${PWD} node:25-bookworm-slim npm'

  - No access to my env vars
  - No access to anything outside my current directory (usually a JS project).
  - No access to my .bashrc or other files.
Ref: https://ashishb.net/programming/run-tools-inside-docker/
  • phiresky 6 hours ago

    That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

    Also I can recommend pnpm, it has stopped executing lifecycle scripts by default so you can whitelist which ones to run.

    • ashishb 5 hours ago

      > Also I can recommend pnpm, it has stopped executing lifecycle scripts by default so you can whitelist which ones to run.

      Imagine you are on a 50-person team that maintains 10 JavaScript projects. Which of these is easier?

        - Switch all projects to `pnpm`? That means switching CI and deployment processes as well
        - Change the way *you* run `npm` on your machine and let your colleagues know to do the same
      
      I find the second to be a lot easier.
      • afavour 5 hours ago

        There are a great many extra perks to switching to pnpm though. We switched on our projects a while back and haven’t looked back.

      • fragmede 2 hours ago

        Am I missing something? Don't you also need to change how CI and deployment processes call npm? If my CI server and then also my deployment scripts are calling npm the old insecure way, and running infected install scripts/whatever, haven't I just still fucked myself, just on my CI server and whatever deployment system(s) are involved? That seems bad.

        • ashishb 2 hours ago

          Your machine has more projects, data, and credentials than your CI machine, as you normally don't log into Gmail on your CI. So, just protecting your machine is great.

          Further, you are welcome to use this alias on your CI as well to enhance the protection.

    • ashishb 5 hours ago

      > That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

      I won't execute that code directly on my machine. I will always execute it inside the Docker container. Why do you want to run commands like `vite` or `eslint` directly on your machine? Why do they need access to anything outside the current directory?
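
      A minimal sketch of what that looks like in practice, assuming the same node:25-bookworm-slim image as the alias above (the `npx` alias here is just illustrative):

        alias npx='docker run --rm -it -v ${PWD}:${PWD} --net=host --workdir=${PWD} node:25-bookworm-slim npx'
        npx eslint .   # runs inside the container and only sees the mounted project directory
        npx vite       # the dev server is still reachable on localhost because of --net=host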

      • bandrami 4 hours ago

        I get this but then in practice the only actually valuable stuff on my computer is... the code and data in my dev containers. Everything else I can download off the Internet for free at any time.

        • ashishb 4 hours ago

          No.

          The most valuable data on your system, for a malware author, is the login cookies and saved auth tokens of various services.

          • hinkley 3 hours ago

            Maybe keylogging for online services.

            But it is true that work and personal machines have different threat vectors.

            • spicybright an hour ago

              Yes, but I'm willing to bet most workers don't follow strict digital life hygiene and cross contaminate all the time.

      • apsurd 4 hours ago

        It annoys me that people fully automate things like type checkers and linting into post-commit hooks or, worse, outsource them entirely to CI.

        Because it means the hygiene is thrown over the fence in a post commit manner.

        AI makes this worse because they also run them "over the fence".

        However you run it, I want a human to hold accountability for the mainline committed code.

        • ashishb 4 hours ago

          I run linters like eslint on my machine inside a container. This reduces attack surface.

          How does this throw hygiene over the fence?

          • apsurd 3 hours ago

            Yes, in a sibling reply I was able to better understand your comment to mean "run stuff on my machine in a container"

      • throwaway290 4 hours ago

        It's weird that it's downvoted because this is the way

        • apsurd 4 hours ago

          maybe i'm misunderstanding the "why run anything on my machine" part. is the container on the machine? isn't that running things on your machine?

          is he just saying always run your code in a container?

          • minitech 4 hours ago

            > is the container on the machine?

            > is he just saying always run your code in a container?

            yes

            > isn't that running things on your machine?

            in this context where they're explicitly contrasted, it isn't running things "directly on my machine"

    • worthless-trash 8 minutes ago

      > That seems a bit excessive to sandbox a command that

      > really just downloads arbitrary code you are going to

      > execute immediately afterwards anyways?

      I don't want to stereotype, but this logic is exactly why the JavaScript supply chain is in the mess it's in.

    • simpaticoder 5 hours ago

      pnpm has lots of other good attributes: it is much faster, and also keeps a central store of your dependencies, reducing disk usage and download time, similar to what java/mvn does.

    • Kholin 3 hours ago

      I've tried using pnpm to replace npm in my project. It really speeds up installing dependencies on the host machine, but it's much slower in the CI containers, even after configuring the cache volume, which made me come back to npm.

  • kernc 3 hours ago

    > alias npm=...

    I use sandbox-run: https://github.com/sandbox-utils/sandbox-run

    The above simple alias may work for node/npm, but it doesn't generalize to many other programs available on the local system, with resources that would need to be mounted into the container ...

    • fingerlocks an hour ago

      Or use ‘chroot’. Or run it as a restricted owner with ‘chown’. Your grandparents’ solutions to these problems still work.
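
      A rough sketch of the restricted-owner variant (names are made up; the idea is that install scripts run as a throwaway user that owns nothing but the project directory):

        sudo useradd --create-home builder      # throwaway account for installs
        sudo chown -R builder: .                # hand the project dir to that user
        sudo -u builder -H npm install          # scripts can't touch your real $HOME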

    • ashishb 2 hours ago

      > The above simple alias may work for node/npm, but it doesn't generalize to many other programs available on the local system, with resources that would need to be mounted into the container ...

      Thanks. You are right, running inside Docker won't always work for local commands. But I am not even using local commands.

      In fact, I have already removed `yarn`, `npm`, and several similar tools from my machine.

      It is best to run them inside Docker.

      > I use sandbox-run: https://github.com/sandbox-utils/sandbox-run

      How does this work if my local command is a macOS binary? How will it run inside a Docker container?

  • bitbasher 4 hours ago

    There are so many vectors for this attack to piggyback on.

    If I had malicious intentions, I would probably typosquat popular plugins/LSPs that will execute code automatically when the editor runs. A compromised neovim or vscode gives you plenty of user permissions, a full scripting language, the ability to do HTTP calls, system calls, etc. Most LSPs are installed globally; it doesn't matter if you downloaded it via a docker command.

    • ashishb 3 hours ago

      > A compromised neovim or vscode gives you plenty of user permissions, a full scripting language, ability to do http calls, system calls, etc. Most LSPs are installed globally, doesn't matter if you downloaded it via a docker command.

      Running `npm` inside Docker does not solve this problem. However, running `npm` inside Docker does not make this problem worse either.

      That's why I said running `npm` inside Docker reduces the attack surface of the malicious NPM packages.

  • sthuck 6 hours ago

    That definitely helps and is worth doing. On a Mac, though, I guess you need to move the entire development workflow into containers due to native dependencies.

    • chuckadams 5 hours ago

      My primary dev environment is containers, but you can do a hell of a lot with nix on a mac.

  • genpfault 6 hours ago

    > triple-backtick code blocks

    If only :(

ab_testing 3 hours ago

Given the recent npm attacks, is it even safe to develop using npm? Whenever I start a React project, it downloads hundreds of additional packages and I have no idea what they do. As a developer who has learned programming as a hobby, is it better to stick to some other, safer way to develop front end, like Thymeleaf or plain JS or something else?

When I build a backend in Flask or Django, I specifically type the Python packages that I need. But front-end development seems like a Pandora's box of vulnerabilities.

  • socalgal2 3 hours ago

    It's no different anywhere else. I just downloaded jj (Rust) and it installed 470+ packages.

    When I downloaded wan2gp (Python), it installed 211 packages.

    • scuff3d 41 minutes ago

      One of the biggest things that pushes me away from Rust is the reliance on micro dependencies. It's a terrible model.

    • BrouteMinou 3 hours ago

      M'yea, good luck finding such an occurrence with NuGet or Maven, for example. I would rephrase your "anywhere else".

      NPM is a terrible ecosystem, and trying to defend its current state is a lost cause. The energy should be focused on how to fix that ecosystem instead of playing dumb telling people "it's all ok, look at other, also poorly designed, systems".

      Don't forget that Rust's Cargo got heavily inspired by NPM, which is not something to brag about.[0]

      > "Rust has absolutely stunning dependency management," one engineer enthused, noting that Rust's strategy took inspiration from npm's.

      [0]https://rust-lang.org/static/pdfs/Rust-npm-Whitepaper.pdf

  • nektro 3 hours ago

    this is one of the less talked about benefits of using bun

    • Defletter 2 hours ago

      How does Bun avoid this? Or is it more that Bun provides things that you'd otherwise need a dependency for (eg: websockets)?

  • fragmede 3 hours ago

    Just a heads up that PyPI isn't immune to the same attack: typing "PyPI supply chain attack" into Google reveals a (much smaller) number of packages that turned out to be malware. Some were not misspellings either; one was a legitimate package that got hacked via GitHub Actions and had a malicious payload added to it.

crtasm 9 hours ago

>When you run npm install, npm doesn't just download packages. It executes code. Specifically, it runs lifecycle scripts defined in package.json - preinstall, install, and postinstall hooks.

What's the legitimate use case for a package install being allowed to run arbitrary commands on your computer?

Quote is from the researchers report https://www.koi.ai/blog/phantomraven-npm-malware-hidden-in-i...

edit: I was thinking of this other case that spawned terminals, but the question stands: https://socket.dev/blog/10-npm-typosquatted-packages-deploy-...

  • j1elo 9 hours ago

    Easy example that I know of: the Mediasoup project is a library written in C++ for streaming video over the internet. It is published as a Node package and offers a JS API. Upon installing, it would just download the appropriate C++ sources and compile them on the spot. The project maintainers wanted to write code, not manage precompiled builds, so that was the most logical way of installing it. Note that a while ago they ended up adding downloadable builds for the most common platforms, but for anything else the expectation still was (and is, I guess) to build sources at install time.

    • exe34 9 hours ago

      how hard would it be to say "upon first install, run do_sketchy_shit.sh to install requirements"?

      • IgorPartola an hour ago

        Hard. In npm land you install React and 900 other dependencies come with it. And how OK are you with reviewing every single one of those scripts and manually running them? Not that it is good that this happens, but realistically most people would just say “run all” and let it run instead of running each lifecycle script by hand.

      • SoftTalker 6 hours ago

        But most users would do that without inspecting it at all, and a fair number would prefix it with “sudo” out of habit.

        • nkrisc 4 hours ago

          But that’s at least a conscious and explicit action the user chooses to make and is explicitly aware of making.

        • hombre_fatal 4 hours ago

          That's fine, and it's still better than doing it on install.

      • cyphar 4 hours ago

        rpm and dpkg both provide mechanisms to run scripts on user machines (usually used to configure users and groups on the user machine), so this aspect is not NPM-specific. Rust has the same thing with build.rs (which is necessary to find shared C libraries for crates that link with them), so there is a legitimate need for this that would be hard to eliminate.

        Personally, I think the issue is that it is too easy to create packages that people can then pull too easily. rpm and dpkg are annoying to write for most people and require some kind of (at least cursory) review before they can be installed on users' systems from the default repos. Both of these act as barriers against the kinds of lazy attacks we've seen in the past few months. Of course, no language package registry has the bandwidth to do that work, so Wild West it is!

        • scheme271 3 hours ago

          rpm and dpkg generally install packages from established repos that vet maintainers. It's not much but having to get one or two other established package authors to vouch for you and having to have some community involvement before you can publish to distro repos is something.

          • cyphar 2 hours ago

            I agree, that is what I talk about in the second paragraph! ;)

      • ares623 2 hours ago

        You see, when you treat everything as a "product", this is what you end up with.

      • lelandbatey 8 hours ago

        People want package managers to do that for them. As much as I think it's often a mistake (if your stuff requires more than expanding archives into different folders to install, then somewhere in the stack something has gone quite wrong), I will concede that because we live in an imperfect world, other folks will want the possibility to "just run the thing automatically to get it done." I hope we can get to a world where such hooks are no longer required one day.

        • exe34 8 hours ago

          yes that's why npm is for them. I'd rather download the libraries that I need one by one.

  • squidsoup 9 hours ago

    pnpm v10 disables all lifecycle scripts by default and requires the user to whitelist packages.

    https://github.com/orgs/pnpm/discussions/8945
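
    Roughly, the whitelist lives under the `pnpm` key in package.json (a hedged sketch from memory of the pnpm docs, so double-check the field name and the package names):

      # package.json (excerpt):
      #   "pnpm": { "onlyBuiltDependencies": ["esbuild", "sharp"] }
      pnpm install   # lifecycle scripts run only for the whitelisted packages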

    • sroussey 7 hours ago

      It’s just security theater in the end. You can just as easily put all that stuff in the package files since a package is installed to run code. You have that code then do all the sketchy stuff.

      What’s needed is an entitlements system so a package you install doesn’t do runtime stuff like install crypto mining software. Even then…

      • Mogzol 6 hours ago

        A package, especially a javascript package, is not necessarily installed to run code, at least not on the machine installing the package. Many packages will only be run in the browser, which is already a fairly safe environment compared to running directly on the machine like lifecycle scripts would.

        So preventing lifecycle scripts certainly limits the number of packages that could be exploited to get access to the installing machine. It's common for javascript apps to have hundreds of dependencies, but only a handful of them will ever actually run as code on the machine that installed them.

      • theodorejb 7 hours ago

        I would expect to be able to download a package and then inspect the code before I decide to import/run any of the package files. But npm by default will run arbitrary code in the package before developers have a chance to inspect it, which can be very surprising and dangerous.

    • chrisweekly 7 hours ago

      One of the many reasons there is no good reason to use npm; pnpm is better in every way.

    • chuckadams 6 hours ago

      PHP composer does the same, in config.allow-plugins.<package> in composer.json. The default behavior is to prompt, with an "always" option to add the entry to composer.json. It's baffling that npm and yarn just let the scripts run with nary a peep.

    • ehutch79 6 hours ago

      Also, you can now pin versions in that whitelist

  • zahlman 7 hours ago

    > doesn't just download packages. It executes code. Specifically, it

    It pains me to remember that the reason LLMs write like this is because many humans did in the training data.

    • marcus_holmes 4 hours ago

      Is the objection the small sentence that could have been a clause?

    • jsrozner 5 hours ago

      That whole koi blog post is sloppy AI garbage, even if it's accurate. So obnoxious.

  • bandrami 4 hours ago

    OK but have you seen how many projects' official installation instructions are some form of curl | bash?

  • DangitBobby 8 hours ago

    I seem to recall Husky at one point using lifecycle hooks to install the git hooks configured in your repository when running NPM install.

  • interstice 8 hours ago

    Notable times this has bitten me include compiling image compression tools for gulp and older versions of sass, oh and a memorable one with openssl. Downloading a npm package should ideally not also require messing around with c compilation tools.

  • vorticalbox 9 hours ago

    One use case is downloading of binaries. For example, mongodb-memory-server [0] will download the MongoDB binary after you have installed it.

    [0] https://www.npmjs.com/package/mongodb-memory-server

    • 8note 9 hours ago

      why would I want that though, compared to downloading that binary as part of the package install?

      the npm version is decoupled from the binary version, when I want them locked together

      • jonhohle 8 hours ago

        I think it falls into a few buckets:

        A) maintainers don’t know any better and connect things with string and gum until it mostly works, and ship it

        B) people who are smart, but naive and think it will be different this time

        C) package manager creators who think they’re creating something that hasn’t been done before, don’t look at prior art or failures, and fall into all of the same holes literally every other package manager has fallen into and will continue to fall into because no one in this industry learns anything.

jtokoph 5 hours ago

Keep in mind that the vast majority of the 86,000 downloads are probably automated downloads by tools looking for malicious code, or other malicious tools pulling every new package version looking for leaked credentials.

When I iterate with new versions of a package that I’ve never promoted anywhere, each version gets hundreds of downloads in the first day or two of being published.

86,000 people did not get pwned; possibly even zero did.

  • userbinator 4 hours ago

    Or it's some poor idiot's CI repeatedly downloading them, and for a zombie project that no one will ever use.

  • marcus_holmes 3 hours ago

    As TFA says, they're targeting package names that are somewhere in LLM training data but don't actually exist, so are being hallucinated by LLMs. And there's now a large number of folks with zero clue busy vibe-coding their killer app with no idea that bad things can happen.

    I would not be surprised to find that 80%+ of those 86,000 people got pwned.

creativeSlumber an hour ago

> Many of the dependencies used names that are known to be “hallucinated” by AI chatbots. Developers frequently query these bots for the names of dependencies they need. LLM developers and researchers have yet to understand the precise cause of hallucinations or how to build models that don’t make mistakes. After discovering hallucinated dependency names, PhantomRaven uses them in the malicious packages downloaded from their site.

I found it very interesting that they used common AI hallucinated package names.

650REDHAIR 9 hours ago

As a hobbyist how do I stay protected and in the loop for breaches like this? I often follow guides that are popular and written by well-respected authors and I might be too flippant with installing dependencies trying to solve a pain point that has derailed my original project.

Somewhat related, I also have a small homelab running local services and every now and then I try a new technology. occasionally I’ll build a little thing that is neat and could be useful to someone else, but then I worry that I’m just a target for some bot to infiltrate because I’m not sophisticated enough to stop it.

Where do I start?

  • socalgal2 3 hours ago

    (1) Start by not using packages that have stupid dependencies

    Any package that includes a CLI version in the library should have its dev shamed. Usually that adds 10-20 packages. Those 2 things, a library that provides some functionality, and a CLI command that lets you use the library from the command line, SHOULD NEVER BE MIXED.

    The library should be its own package without the bloat of the command line crap

    (2) Choose low dependency packages

    Example: commander has no dependencies, minimist now has no dependencies. Some other command line parsers used to have 10-20 dependencies.

    (3) Stop using packages when you can do it yourself in 1-2 lines of JS

    You don't need a package to copy files. `fs.copyFileSync` will copy a file for you, `fs.cpSync` will copy a tree, and `child_process.spawn` will spawn a process. You don't need some package to do these things. There are plenty of other examples where you don't need a package.
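
    For example (a sketch; the paths are made up), the whole "copy a tree" use case is a one-liner against Node's own API:

      node -e "require('fs').cpSync('assets', 'dist/assets', { recursive: true })"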

  • numbsafari 8 hours ago

    Don't do development on your local machine. Full stop. Just don't.

    Do development, all of it, inside VMs or containers, either local or remote.

    Use ephemeral credentials within said VMs, or use no credentials. For example, do all your git pulls on your laptop directly, or in a separate VM with a mounted volume that is then shared with the VM/containers where you are running dev tooling.

    This has the added benefit of not only sandboxing your code, but also making your dev environments repeatable.

    If you are using GitHub, use codespaces. If you are using gitlab, workspaces. If you are using neither, check out tools like UTM or Vagrant.

    • bigstrat2003 4 hours ago

      That's not a realistic solution. Nobody is going to stop using their machine for development just to get some security gains; it's way too much of a pain to do that.

      • socalgal2 3 hours ago

        You are right, if it's a pain no one is going to do it. So the thing that needs to happen is to make it not a pain.

      • fragmede 2 hours ago

        The way to sell it isn't vague security somethings, but in making it easier to reproduce the build environment "from scratch". If you build the Dockerfile as you go, then you don't waste hours at the end trying to figure out what you did to get it to build and run in the first place.

    • suck-my-spez 7 hours ago

      Are people actually using UTM to do local development?

      I'm genuinely curious because I casually looked into it so that I could work on some hobby stuff over lunch on my work machine.

      However I just assumed the performance wouldn't be too great.

      Would love to hear how people are setup…

      • rickstanley 5 hours ago

        When I had a Macbook from work, I set up an Arch Linux VM using their basic VM image [1], and followed these steps (it may differ, since it's quite old): https://www.youtube.com/watch?v=enF3zbyiNZA

        Then, I removed the graphical settings, as I was aiming to use SSH instead of the emulated TTY that comes on by default with UTM (at that time).

        Finally, I set up some basic scripting to turn the machine on and SSH into it as soon as sshd.service was available, which I don't have now, but the script finished with this:

        (fish shell)

            while not ssh -p 2222 arch@localhost; sleep 2; end;
        
        Later it evolved in something like this:

            virsh start arch-linux_testing && virsh qemu-monitor-command --hmp arch-linux_testing 'hostfwd_add ::2222-:22' && while not ssh -p 2222 arch@localhost; sleep 2; end;
        
        I also removed some unnecessary services for local development:

            arch@archlinux ~> sudo systemctl mask systemd-time-wait-sync.service 
            arch@archlinux ~> sudo systemctl disable systemd-time-wait-sync.service
        
        
        And done; performance was really good and I could develop seamlessly.

        [1]: https://gitlab.archlinux.org/archlinux/arch-boxes/-/packages...

      • hombre_fatal 4 hours ago

        I started using UTM last week on my Macbook just to try out NixOS + sway and see if I could make an environment that I liked using (inspired by the hype around Omarchy).

        Pretty soon I liked using the environment so much that I got my work running on it. And when I change the environment, I can sync it to my other machine.

        Though NixOS is particularly magical as a dev environment since you have a record of everything you've done. Every time I mess with Postgres pg_hba.conf or nginx or pcap on my local machine, I think "welp, I'll never remember that I did that".

      • suchar 5 hours ago

        With remote development (vscode and remote extension in jetbrains with ssh to VM) performance is good with headless VM in UTM. Although it always (?) uses performance cores on Apple Silicon Macs, so battery drain is a problem

  • jonhohle 8 hours ago

    There are some operating systems, like FreeBSD, where you use the system’s package manager and not a million language specific package managers.

    I still maintain pushing this back to library authors is the right thing to do instead of making this painful for literally millions of end-users. The friction of getting a package accepted into a critical mass of distributions is the point.

  • Etheryte 9 hours ago

    Use dependencies that are fairly popular and pick a release that's at least a year old. Done. If there was something wrong with it, someone would've found it by now. For a hobbyist, that's more than sufficient.

  • marcus_holmes 3 hours ago

    Somewhat controversial these days, but treat every single dependency as a potential security nightmare, source of bugs, problem that you will have to solve in the future. Use dependencies carefully and as a last resort.

    Vendoring dependencies (copying the package code into your project rather than using the package manager to manage it) can help - it won't stop a malicious package, but it will stop a package from turning malicious.

    You can also copy the code you need from a dependency into your code (with a comment giving credit and a link to the source package). This is really useful if you just need some of the stuff that the package offers, and also forces you to read and understand the package code; great practice if you're learning.

    • devsda 3 hours ago

      Inspecting 10 layers of dependencies individually to install a popular tool or an lsp server is going to work once or twice. Eventually either complacency or fatigue sets in and the attacker wins.

      I think we need a different solution that fixes the dependency bloat or puts more safeguards around package publishing.

      The same goes for any other language with excessive third-party dependency requirements.

      • marcus_holmes an hour ago

        Agree.

        It's going to take a lot of people getting pwned to change these attitudes though

  • pier25 5 hours ago

    Avoid dependencies with less than 1M downloads per week. Prefer dependencies that have zero dependencies like Hono or Zod.

    https://npmgraph.js.org/?q=hono

    https://npmgraph.js.org/?q=zod

    Recently I switched to Bun in part because many dependencies are already included (db driver, s3 client, etc) that you'd need to download with Node or Deno.

  • uyzstvqs 6 hours ago

    I'm not sure about NPM specifically, but in general: Pick a specific version and have your build system verify the known good checksum for that version. Give new packages at least 4 weeks before using them, and look at the git commits of the project, especially for lesser-known packages.
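
    For npm specifically, a minimal version of that (a sketch; the package name is just an example) is to pin exact versions and let the lockfile's integrity hashes do the checksum verification:

      npm install --save-exact left-pad@1.3.0   # exact version, no ^ range
      git add package.json package-lock.json    # commit the integrity hashes too
      npm ci                                    # installs only what the lockfile says, or fails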

  • jhancock 5 hours ago

    As 'numbsafari said below, you should no longer use your host for dev. This includes all those cool AI assistant tools. You need to containerize all the things with RunPod or Docker.

  • ajross 9 hours ago

    > As a hobbyist how do I stay protected and in the loop for breaches like this?

    For the case of general software, "Don't use node" would be my advice, and by extension any packaging backend without external audit and validation. PyPI has its oopses too; Cargo is theoretically just as bad but in practice has been safe.

    The gold standard is Use The Software Debian Ships (Fedora is great too, arch is a bit down the ladder but not nearly as bad as the user-submitted madness outside Linux).

    But it seems like your question is about front end web development, and that's not my world and I have no advice beyond sympathy.

    > occasionally I’ll build a little thing that is neat and could be useful to someone else, but then I worry that I’m just a target for some bot

    Pretty much that's the problem exactly. Distributing software is hard. It's a lot of work at a bunch of different levels of the process, and someone needs to commit to doing it. If you aren't willing to commit your time and resources, don't distribute it in a consumable way (obviously you can distribute what you built with it, and if it's appropriately licensed maybe someone else will come along and productize it).

    NPM thought they could hack that overhead and do better, but it turns out to have been a moved-too-fast-and-broke-things situation in hindsight.

    • zahlman 6 hours ago

      > PyPI has its oopses too, Cargo is theoretically just as bad but in practice has been safe.

      One obvious further mitigation for Python is to configure your package installer to require pre-built wheels, and inspect the resulting environment prior to use. Of course, wheels can contain all sorts of compiled binary blobs and even the Python code can be obfuscated (or even missing, with just a compiled .pyc file in its place); but at least this way you are protected from arbitrary code running at install time.
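
      Concretely, something like this (a sketch; the relevant knob in pip is --only-binary):

        pip install --only-binary=:all: -r requirements.txt   # refuse sdists, so no setup.py runs at install time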

    • squidsoup 9 hours ago

      Having spent a year trying to develop against dependencies only provided by a debian release, it is really painful in practice. At some point you're going to need something that is not packaged, or newer than the packaged version in your release.

      • LtWorf 7 hours ago

        That's when you join debian :)

      • ajross 8 hours ago

        It really depends on what you're doing. But yes, if you want to develop in "The NPM Style" where you suck down tiny things to do little pieces of what you need (and those things suck down tiny things, ad infinitum) then you're naturally exposed to the security risks inherent with depending on an unaudited soup of tiny things.

        You don't get secure things for free, you have to pay for that by doing things like "import and audit software yourself" or even "write simple utilities from scratch" on occasion.

    • paulryanrogers 7 hours ago

      Didn't Debian ship a uniquely weak version of OpenSSL for years? HeartBleed perhaps?

      IME Debian is falling behind on security fixes.

      • ajross 6 hours ago

        They did, and no one is perfect. But Debian is the best.

        FWIW, the subject at hand here isn't accidentally introduced security bugs (which affect all software and aren't well treated by auditing and testing). It's deliberately malicious malware appearing as a dependency to legitimate software.

        So the use case here isn't Heartbleed, it's something like the xz-utils trojan. I'll give you one guess as to who caught that.

    • megous 7 hours ago

      As a hobbyist (or professionally) you can also write code without dependencies outside of Node itself.

gbransgrove 7 hours ago

Because these are fetching dependencies in the lifecycle hooks, even if they are legitimate at the moment there is no guarantee that it will stay that way. The owner of those dependencies could get compromised, or themselves be malicious, or be the package owner waiting to flip the switch to make existing versions become malicious. It's hard to see how the lifecycle hooks on install can stay in their current form.

severino 7 hours ago

I wonder what one could do if they want to use NPM for programming with a very popular framework (like Angular or Vue) and stay safe. Is just picking a not-very-recent version of the top-level framework (Angular, etc.) enough? Is it possible to somehow isolate NPM so the code it runs, like those postinstall hooks, doesn't mess with your system, while at the same time allowing you to use it normally?

  • theodorejb 7 hours ago

    One option to make it a little safer is to add ignore-scripts=true to a .npmrc file in your project root. Lifecycle scripts then won't run automatically. It's not as nice as pnpm or Bun, though, since this also prevents your own postinstall scripts from running (not just those of dependencies), and there's no way to whitelist trusted packages.
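
    Something like this, per project (a sketch):

      echo "ignore-scripts=true" >> .npmrc
      npm install   # preinstall/install/postinstall hooks are now skipped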

edoceo 10 hours ago

Happy I keep a mirror of my deps that I have to "manually" update. But also, the download numbers are not really accurate for the actual install count; for example, each test run could increment it.

cxr 9 hours ago

Imagine if we had a system where you could just deposit the source code for a program you work on into a "depository". You could set it up so your team could "admit" the changes that have your approval, but it doesn't allow third parties to modify what's in your depository (even if it's a library that you're using that they wrote). When you build/deploy your program, you only compile/run third-party versions that have been admitted to the depository, and you never just eagerly fetch other versions that purport to be updates right before build time. If there is an update, you can download a copy and admit it to your repo at the normal time that you verify that your program actually needs the update. Even if it sounds far-fetched, I imagine we could get by with a system like this.

  • chrisweekly 9 hours ago

    You're describing a custom registry. These exist IRL (eg jFrog Artifactory). Useful for managing allow-listed packages which have met whatever criteria you might have (eg CVE-free based on your security tool of choice). Use of a custom registry, and a sane package manager (pnpm, not npm), and its lockfile, will significantly enhance your supply-chain security.

    • cxr 7 hours ago

      No. I am literally describing bog standard use of an ordinary VCS/SCM where the code for e.g. Skia, sqlite, libpng, etc. is placed in a "third-party/" subdirectory. Except I'm deliberately using the words "admit" and "depository" here instead of "commit" and "repository" in keeping with the theme—of the widespread failure of people to use SCMs to manage the corresponding source code required to build their product/project.

      Overlay version control systems like NPM, Cargo, etc. and their harebrained schemes involving "lockfiles" to paper over their deficiencies have evidently totally destroyed not just folks' ability to conceive of just using an SCM like Git or Mercurial to manage source the way that they're made for without introducing a second, half-assed, "registry"-dependent VCS into the mix, but also destroyed the ability to recognize when a comment on the subject is dripping in the most obvious, easily detectable irony.
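
        In concrete terms (a sketch; the library name is made up), the workflow is nothing more exotic than:

          cp -r ~/src/libfoo third-party/libfoo     # copy the exact sources you reviewed
          git add third-party/libfoo
          git commit -m "Admit libfoo 1.2.3 to the depository"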

      • minitech 3 hours ago

        Yeah, people invented the concept of packages and package management because they couldn’t conceive of vendoring (which is weird considering basically all package managers make use of it themselves) and surely not because package management has actual benefits.

        Maybe in a perfect world, we’d all use a better VCS whose equivalent of submodules actually could do that job. We are not in that world yet.

        • cxr 2 hours ago

          Do you understand the reasons, and are you able to clearly articulate them? Are you able to describe the tangible benefits in the form of a set of falsifiable claims—without resorting to hand-waving or appeals to the perceived status quo or scoffing as if the reasons are self-evident and not in question or subject to scrutiny?

      • morshu9001 6 hours ago

        Does the lockfile not solve this?

        • socalgal2 3 hours ago

          Not really, because you can't easily see what changed when you get a new version. When you check the third_party directory into your VCS, then when you get a new version, everything that changed is easily visible with `git diff` before you commit the new changes. With a lockfile, the only diff is that the hash changed.

          • cyphar 2 hours ago

            Not if you use git submodules, which is how most people would end up using such a scheme in practice (and the handful of people that do this have ended up using submodules).

            Go-style vendoring does dump everything into a directory, but that has other downsides. I also question how effectively you can audit dependencies this way -- C developers don't have to do this unless there's a problem they're debugging, and at least for C it is maybe a tractable problem to audit your entire dependency graph for every release (of which there are relatively few).

            Unfortunately IMHO the core issue is that making the packaging and shipping of libraries easy necessarily leads to an explosion of libraries with no mechanism to review them -- you cannot solve the latter without sacrificing the former. There were some attempts to crowd-source auditing as plugins for these package managers but none of them bore fruit AFAIK (there is cargo-audit but that only solves one part of the puzzle -- there really needs to be a way to mark packages as "probably trustworthy" and "really untrustworthy" based on ratings in a hard-to-gamify way).

          • minitech 3 hours ago

            The problem is that not enough people care about reviewing dependencies’ code. Adding what they consider noise to the diff doesn’t help much (especially if what you end up diffing is actually build output).

        • cxr 6 hours ago

          What is "this"?

      • chrisweekly 6 hours ago

        Huh? "Just use git" is kind of nonsensical in the context of this discussion.

        • cxr 5 hours ago

          Oh, okay.

  • kej 9 hours ago

    Now you have the opposite problem, where a vulnerability could be found in one of your dependencies but you don't get the fix until the next "normal time that you verify that your program actually needs the update".

    • edoceo 8 hours ago

      If a security issue is found that creates the "normal time".

      That is, when a security issue is found, regardless of supply chain tooling one would update.

      That there is a little cache/mirror thing in the middle is of little consequence in that case.

      And for all other cases the blessed versions in your mirror are better even if not latest.

  • zahlman 6 hours ago

    So, vendoring?

  • lenkite 9 hours ago

    Well in the Java world, Maven had custom repositories which did this for the last 20+ years.

  • anthk 9 hours ago

    You are describing BSD ports from the 90's. FreeBSD ports date back to 1993.

  • edoceo 9 hours ago

    That is exactly what I do.

akagusu 4 hours ago

Unpopular opinion: why not reduce the dependency on 3rd party packages? Why not reduce the number of dependencies so you can know what code you are using?

  • gavmor 18 minutes ago

    Because then I would have to test, write, and maintain that code—and it becomes susceptible to leaky abstractions!

  • BobbyTables2 4 hours ago

    I’ve wondered this for so long, I questioned my own sanity.

Uptrenda 4 hours ago

I dub thee "node payload manager."

worik 6 hours ago

This has been going on for years now.

I have used Node; I would not go near the NPM auto-install Spyware service.

How is it possible that people keep this service going, when it has been compromised so regularly?

How's it possible that people keep using it?

noosphr 6 hours ago

A day ago I got downvoted to hell for saying that the JavaScript ecosystem has rotted the minds of developers and that any tools that emulate npm should be shunned as much as possible - they are not solutions, they are problems.

I don't usually get to say 'I told you so' within 24 hours of a warning, but JS is special like that.

  • cogman10 5 hours ago

    There's nothing really special about the JS ecosystem that creates this problem. Plenty of others could fall in the same way, including C++ (see xz).

    The problem is we've been coasting through an era where blind trust was good enough and programming was niche enough.

ghusto 9 hours ago

When people ask me what's so wrong with lowering the bar of entry for engineering, I point to things like this.