badsectoracula a day ago

This reminds me of when i worked on an open world game ~12 years ago. I implemented spline-based "stripes" (think long decals), mainly to be used for roads, though they could also be used for smaller things like blood trails. I added a button in the editor to align the terrain with the stripe, with a configurable falloff so that it transitions smoothly from the stripe to the surrounding terrain - and optionally you could use a grayscale texture to modulate the falloff so that the transition isn't completely smooth.
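The falloff idea can be sketched roughly like this (a minimal sketch; the function names and the smoothstep easing are my assumptions, not the original tool's implementation):

```python
# Sketch of pulling terrain height toward a spline "stripe" with a falloff.
# All names are illustrative; the original editor's details aren't public.

def smoothstep(t):
    """Standard smoothstep easing, clamped to 0..1."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def blended_height(terrain_h, stripe_h, dist_to_stripe, falloff_radius,
                   falloff_texture=None):
    """Blend the terrain toward the stripe's height, fading with distance.

    dist_to_stripe: closest distance from this heightmap sample to the stripe.
    falloff_texture: optional callable returning 0..1, so the transition
                     need not be perfectly smooth (the grayscale modulation).
    """
    if dist_to_stripe >= falloff_radius:
        return terrain_h                       # outside the falloff: untouched
    t = 1.0 - dist_to_stripe / falloff_radius  # 1 on the stripe, 0 at the edge
    w = smoothstep(t)
    if falloff_texture is not None:
        w *= falloff_texture(dist_to_stripe)
    return terrain_h + w * (stripe_h - terrain_h)
```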

Next thing i knew, the environment artists had started using stripes to sculpt the terrain :-P

  • Minor49er a day ago

    You're Bad Sector from the Rome.ro forums (way back when those existed), aren't you?

dahart 12 hours ago

This is cool. It’d be interesting to compare the authoring UI, workflow, and relative merits of curve-authored terrain to surfel-authored terrain. And those should be compared to patch & poly workflows, as well as painting tools like ZBrush. I can imagine authoring unstructured curves to model a surface can be quite nice for certain things, but you probably do lose some of the advantages of other methods.

> I cannot stress this enough the thing we want here is the thing you're probably used to calling "the surface normal", but—for reasons I am not responsible for—the thing we want is instead called the "binormal" here and the thing that is called the "normal" is instead a different thing. Why did the mathematicians do this to us?!

In case the author is reading, this paragraph has some misconceptions in it. Mathematicians did not cause your curve binormal to line up with your surface normal; that’s something you have control over. Curve normals will not usually be tangent to your surface, and binormals will not usually line up with a surface normal. There are multiple valid ways to define a curve normal & binormal. The most famous curve normal is the “Frenet” normal, which by definition points in the direction of curvature. If you use the Frenet frame, then the curve normal will be close to (but not exactly the same as) what you want for the reconstructed surface normal.

You can for sure set up your authoring system so that a curve normal you define is the surface normal you want, and the curve binormal becomes one of the surface tangents. (Curve tangent will be the other surface tangent.)

One thing that might be really useful to recognize is that even when curves have an intrinsic normal and binormal provided by your curve authoring tools, you can assign your own normal and binormal to the curve, and encode them as offsets from the intrinsic frame. (The “frame” is the triple of [tangent, normal, binormal].) If Blender is letting you control the curve normals without moving control points, then it’s already doing this for you: it already has two different frames, an internal one you don’t see and a visible one you can manipulate.
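The Frenet frame is easy to estimate numerically. A minimal sketch (the finite-difference scheme and the circle example are my own illustration): for a planar curve lying in flat ground, the Frenet normal points toward the center of curvature, in the ground plane, and it is the *binormal* that points "up" - exactly the naming collision the article runs into.

```python
import numpy as np

def frenet_frame(curve, t, h=1e-4):
    """Estimate the Frenet frame of curve(t) -> np.array of shape (3,).

    Returns (tangent, normal, binormal). The Frenet normal points in the
    direction of curvature, not "up" out of the surface.
    """
    d1 = (curve(t + h) - curve(t - h)) / (2 * h)              # 1st derivative
    d2 = (curve(t + h) - 2 * curve(t) + curve(t - h)) / h**2  # 2nd derivative
    tangent = d1 / np.linalg.norm(d1)
    # Remove the tangential component of d2 to get the curvature direction.
    n = d2 - np.dot(d2, tangent) * tangent
    normal = n / np.linalg.norm(n)
    binormal = np.cross(tangent, normal)
    return tangent, normal, binormal

# A unit circle in the ground plane: the Frenet normal points toward the
# circle's center (in-plane), while the binormal points up (+Z).
circle = lambda t: np.array([np.cos(t), np.sin(t), 0.0])
T, N, B = frenet_frame(circle, 0.3)
```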

pavlov a day ago

It’s nice to see some real-world graphics programming in a visual language.

Node-based environments can be powerful for exploring solutions like this, but they don’t often make it to Hacker News.

  • badsectoracula a day ago

    They can also be quite messy and hard/annoying to follow/parse since a lot of screen real estate is used for trivial things. For example the first graph wastes 300x250 pixels several times for what would be a single character in a text-based language (and that's if you ignore the padding between the nodes since you could claim that whitespace in a text-based language serves the same purpose).

    And unlike a text-based language where there is (usually) a single flow path going towards a single direction, with graphics like the one in the article you have several "flows" so your eyes need to move all over the place all the time to connect the "bits" in order to figure out what is going on.

    Though that last part could probably be solved by using DRAKON graphs instead (which can also use less screen space) since those have a more strict flow.

    IMO graph-based visual languages are nice for very high level flows but the moment you need to use more than a couple "dot product" or "add values" nodes, it is time to switch to a text-based one.

    • pavlov a day ago

      I agree that large node graphs get unwieldy and nobody has come up with a great solution for that. Macros and independently scalable groups and visual annotations can help, but it depends on the type of program.

      I disagree about the single-direction “1D” nature of textual programs being unequivocally a benefit. When there are many independent paths that eventually combine, it’s easier to see the complete flow in a 2D graph. Linear text creates the illusion of dependency and order, even if your computation doesn’t actually have those properties.

      Conserving screen space is a double-edged sword. If that were the most important thing in programming, we’d all be writing APL-style dense one-liners for production. But code legibility and reusability is a lot more than just who does the most with the least symbols.

      • badsectoracula a day ago

        > I disagree about the single-direction “1D” nature of textual programs being unequivocally a benefit. When there are many independent paths that eventually combine, it’s easier to see the complete flow in a 2D graph.

        This is where having separate functions help - as a bonus you can focus on that independent path without the rest being a distraction.

        If there are multiple independent paths that connect at separate points, so that you can't easily isolate each of them into a function, the graph is already a mess.

        > Conserving screen space is a double-edged sword. If that were the most important thing in programming, we’d all be writing APL-style dense one-liners for production.

        That is a bit of an extreme case; it isn't about conserving all the screen real estate, just not being overly wasteful. After all, the most common argument you'd see about, say, Pascal-style syntax is that it is too verbose compared to C-like syntax, despite the difference not being that great. You'll notice that just as APL-like syntax isn't very popular, COBOL-like syntax isn't popular either.

        You don't have to go to extremes - notice that in my comment i never wrote that all uses of graphs should be avoided.

    • nkrisc a day ago

      I would wager that Blender's geometry nodes are overall a net saving in screen real estate when compared to the amount of code they abstract away. Sure, in a trivial example they seem unnecessarily large, but there are some nodes that do a lot of heavy lifting and are no bigger than any other node. Overall a strange metric to track, IMO, unless all your variable names are one letter.

      • badsectoracula a day ago

        Which is why i wrote that last paragraph: for high level stuff with nodes that abstract the heavy lifting they are fine. Sometimes you may even need these "add" and "dot product" nodes to glue these together. The issue is when you start using a lot of "low level" nodes.

        Think of it like using a shell script vs something like Python: you can do a ton of things in shell scripts and write very complex shell scripts, but chances are if a shell script is more than a few lines that glue other programs together, it'd be better to use Python (or something similar) despite both being "scripting" languages.

  • nkrisc a day ago

    There are times when using Blender's shading node graph that I'd really just rather be writing GLSL/HLSL. But overall I still like it.

    Geometry nodes, on the other hand, I think are amazing. They really do provide a very useful abstraction over what would be a lot of tedious and error-prone boiler plate code.

    With just a few nodes I can instance and properly rotate some mesh on every face of some other mesh. Boom, done in two minutes. And you can do so, so much more, of course.

    The obvious downside is that in complex scenarios the graph can be more difficult to manage, and because of the nature of how it's evaluated there are times you need to think about "capturing" attributes (basically a variable to hold an older value of something that gets changed later). But nothing is perfect.
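The "instance on every face, aligned to the normal" step boils down to building a rotation that takes +Z to each face normal. A hedged sketch in plain NumPy (names are illustrative; this is not Blender's actual node implementation):

```python
import numpy as np

def align_z_to(normal):
    """Rotation matrix taking +Z onto the unit vector `normal`.

    Roughly what an "instance on faces + align to normal" setup computes
    per face. Uses Rodrigues' formula specialised to rotating z onto n.
    """
    z = np.array([0.0, 0.0, 1.0])
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    v = np.cross(z, n)            # rotation axis (unnormalised)
    c = np.dot(z, n)              # cosine of the rotation angle
    if np.linalg.norm(v) < 1e-8:  # z and n parallel or anti-parallel
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))
```

Each instance's transform is then this rotation plus a translation to the face center, which is why the node setup stays tiny compared to the equivalent code.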

swiftcoder 15 hours ago

Oh, these are really cool. Excited to see more of this sort of workflow in the wild.

kvark a day ago

They developed a content generation tool based on splines, as well as a rendering algorithm based on finding the closest splines to each point. They are claiming real-time rendering (4k at 120hz) on CPU... why not run this on GPU?

  • qwertox a day ago

    Because you can use tools on the splines. You can't do that on the GPU.

  • felipellrocha a day ago

    Boy... I hate questions of the form "Why not just". Which, to be fair... is not the exact form, but the intent of the question.

  • dcrazy a day ago

    Run _what_ on the GPU? All of Blender?

  • jvanderbot a day ago

    If the claims of real-time use are true, why bother?

    • kvark a day ago

      Because it has to go through the GPU anyway before it reaches the screen, the GPU can be more efficient at doing this (better battery, etc.), and we are wasting time transferring the pixels to the GPU, whereas the splines would be much more compact.
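Rough back-of-envelope numbers for the bandwidth point (assuming uncompressed 8-bit RGBA, which may not match the real pipeline):

```python
# Raw framebuffer traffic for 4K at 120 Hz, 4 bytes per pixel (assumed).
width, height, fps, bytes_per_pixel = 3840, 2160, 120, 4
pixel_bandwidth = width * height * fps * bytes_per_pixel  # bytes/second
print(pixel_bandwidth / 1e9)   # ~4 GB/s of pixels every second

# A spline control point as 3 floats is tiny by comparison:
control_points = 100_000
spline_bytes = control_points * 3 * 4
print(spline_bytes / 1e6)      # ~1.2 MB for 100k control points
```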

      • kevingadd a day ago

        minor nit: it seems like they're not rasterizing every pixel on the CPU, just generating heightmap values, which is a lot lower resolution?

        And games like Dreams have proven that you can ship world-class experiences using CPU rasterization. If it's easier and it performs well enough, there's nothing wrong with it.

        • phire a day ago

          Well, there are two different things here.

          The custom CPU rasteriser (Star Machine) that pushes 4K 120 Hz is mentioned in the intro, but the spline-based terrain implementation covered by the article is just a prototype developed in Blender, which is used for rapid iteration on the algorithm.

          While the Blender version is at least partially GPU accelerated, the final implementation in Star Machine will be entirely on the CPU. It's currently unknown whether the CPU implementation will trace against the cached height map or against a sparse point cloud (also cached).

          • pragmatic8 a day ago

            I looked at the Star Machine repository and it looks like it's using SDL_gpu [1,2], so I am a little confused about where the 'CPU' rasteriser designation comes from.

            [1] https://github.com/Aeva/star-machine/blob/excelsior/star_mac...

            [2] https://wiki.libsdl.org/SDL3/CategoryGPU

            • phire a day ago

              I haven't read though the whole thing, but my rough understanding is:

              - A ray tracer runs on the CPUs, and generates surfels (aka splats)

              - The surfels are uploaded to the GPU

              - Then the GPU rasterizes the surfels into a framebuffer (and draws the UI, probably other things too)

              So it's the ray tracing that's running on the CPU, not the rasterizer. Compared to a traditional CPU ray tracer, the GPU is not idle and still doing what it does best (memory intensive rasterization), and the CPU can do the branchy parts of ray tracing (which GPU ray tracing implementations struggle with).

              The number of surfels can be adjusted dynamically per frame to keep a consistent framerate, and there will be far fewer surfels than pixels, reducing the PCIe bandwidth requirements. The surfels can also be dynamically allocated within a frame, focusing on the parts with high-frequency detail.
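A minimal sketch of what such a per-frame surfel budget could look like (the Surfel fields and the proportional heuristic are my guesses, not Star Machine's actual scheme):

```python
from dataclasses import dataclass

@dataclass
class Surfel:
    # A "splat" as described above: position, orientation, footprint, color.
    position: tuple   # (x, y, z)
    normal: tuple     # (nx, ny, nz)
    radius: float
    color: tuple      # (r, g, b)

def adjust_budget(budget, frame_ms, target_ms=1000.0 / 120.0,
                  lo=10_000, hi=2_000_000):
    """Scale the per-frame surfel count to hold a 120 Hz frame time.

    A simple proportional controller, clamped to sane bounds; the real
    scheme is not documented.
    """
    scaled = int(budget * target_ms / max(frame_ms, 1e-3))
    return max(lo, min(hi, scaled))

# Frame loop shape: trace on the CPU, hand the (much smaller than a full
# framebuffer) surfel list to the GPU for rasterization.
# surfels = cpu_trace(scene, camera, budget)   # hypothetical
# gpu_rasterize(surfels)                       # hypothetical
```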

              It's an interesting design, and I'm curious what else is possible. Can you do temporal reprojection of surfels? Can you move materials partially onto the GPU?