Exploring Euclideon's Unlimited Detail Engine (gameinformer.com)
119 points by timknauf on Nov 24, 2011 | hide | past | favorite | 89 comments


The article sensationally positions this as some incredible breakthrough that the "old guard" of gaming is trying to suppress. More likely, the code works, but has limitations -- the same limitations that led old guard luminaries like Carmack to defer the idea for another few years.

As others have pointed out, voxel-based games have been around for a long time; a recent example is the whimsical "3D Dot Game Hero" for PS3, in which they use the low-res nature of the voxel world as a fun design element.

Voxel-based approaches have huge advantages ("infinite" detail, background details that are deformable at the pixel level, simpler simulation of particle-based phenomena like flowing water, etc.) but they'll only win once computing power reaches an important crossover point. That point is where rendering an organic world a voxel at a time looks better than rendering zillions of polygons to approximate an organic world. Furthermore, much of the effort that's gone into visually simulating real-world phenomena (read the last 30 years of Siggraph conference proceedings) will mostly have to be reapplied to voxel rendering. Simply put: lighting, caustics, organic elements like human faces and hair, etc. will have to be "figured out all over again" for the new era of voxel engines. It will therefore likely take a while for voxel approaches to produce results that look as good, even once the crossover point of level of detail is reached.

I don't mean to take anything away from the hard and impressive coding work this team has done, but if they had more academic background, they'd know that much of what they've "pioneered" has been studied in tremendous detail for two decades. Hanan Samet's treatise on the subject tells you absolutely everything you need to know, and more: (http://www.amazon.com/Foundations-Multidimensional-Structure...) and even goes into detail about the application of these spatial data structures to other areas like machine learning. Ultimately, Samet's book is all about the "curse of dimensionality" and how (and how much) data structures can help address it.

In the late 90s at Naughty Dog, I used Samet's ideas (octrees in particular) for collision detection in the Crash Bandicoot games. In those games, the world was visually rendered with polygons, but physically modeled -- for collision detection purposes, at least -- with an octree. The nice thing about octrees is that they are very simple to work with and self-calibrate their resolution dynamically, making them very space-efficient. Intuitively, a big region of empty air tends to be represented by a handful of huge cubes, while the individual fronds of a fern get coated with dozens or hundreds of tiny cubes, because there's more surface detail to account for in the latter example.
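A minimal sketch (not the actual Crash Bandicoot code; the class and insertion API are illustrative) of the self-calibrating behaviour described above: inserting a lone point creates only a short chain of cubes, while a patch of dense surface detail grows many small leaves.

```python
class OctreeNode:
    def __init__(self, x, y, z, size):
        self.x, self.y, self.z, self.size = x, y, z, size
        self.children = None          # None => this cube is a leaf

    def insert(self, px, py, pz, min_size=1):
        """Subdivide toward the point until cubes reach min_size."""
        if self.size <= min_size:
            return
        if self.children is None:
            self.children = {}
        half = self.size / 2
        # Pick the child octant containing the point.
        ix = 1 if px >= self.x + half else 0
        iy = 1 if py >= self.y + half else 0
        iz = 1 if pz >= self.z + half else 0
        key = (ix, iy, iz)
        if key not in self.children:
            self.children[key] = OctreeNode(self.x + ix * half,
                                            self.y + iy * half,
                                            self.z + iz * half, half)
        self.children[key].insert(px, py, pz, min_size)

    def count_nodes(self):
        if self.children is None:
            return 1
        return 1 + sum(c.count_nodes() for c in self.children.values())

root = OctreeNode(0, 0, 0, 8)
root.insert(1, 1, 1)                  # one isolated point: a thin chain of cubes
n_sparse = root.count_nodes()
for x in range(4):                    # a 4x4 patch of surface detail ("fern fronds")
    for y in range(4):
        root.insert(x + 0.5, y + 0.5, 0.5)
n_dense = root.count_nodes()
```

The empty bulk of the 8-unit cube never gets subdivided; only the octants actually containing geometry pay for resolution.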

I think the crossover point I mentioned earlier will come when GPUs become general-purpose enough to allow massively parallel voxel rendering implementations. That's what surprised me most about this article: they crow that it's a CPU-only technology... why? GPUs excel at tasks involving vast amounts of relatively simple parallel computation.

Prior to the crossover point, we'll see a bunch of cool games that use voxel rendering primarily for gameplay reasons. These games will look chunky compared to their polygonal peers, but will offer unique experiences. Minecraft is a good example. (I'm assuming it's voxel-based, but don't really know.)


You have missed the point. The claim is that Unlimited Detail offers real-time voxel rendering with unlimited detail now, not when there's more powerful computers. It doesn't need a GPU. They supposedly have a new algorithm, which would indeed be revolutionary if true.


If someone were to post an article on HN about a new technology that gets recursive compression (you can compress something 80%, and then do that again and again) because "they have a new algorithm", they would be laughed off the front page.


I don't argue for or against that 'new algorithm', just the topic we are talking about. Nevertheless, I haven't seen a proof that voxels can't be rendered more efficiently (although I think it's very improbable), while infinite compressibility is easy to prove to be impossible.

Edit: You didn't clearly get what is claimed here. 'Infinite detail' is just marketing speak; the claim is that the algorithm can render up to screen resolution any data set whose size is limited only by memory. I.e. rendering speed isn't bounded by the size of the data set. The method has nothing to do with compression, and indeed the data set will be huge if everything is 'infinitely' detailed.


I think the crossover point I mentioned earlier will come when GPUs become general-purpose enough to allow massively parallel voxel rendering implementations

Indeed. This is certainly possible with modern GPUs, and has been for a while. Voxels are pretty well-suited to GPU rendering. See the research I linked to in another reply in this thread.

I didn't get around to reading their entire article, but if they call it a CPU-only technique they are behind the curve instead of ahead.


Well on the other hand, if they can already do this with just CPU, it can only get better as they unleash the power of a GPU as well, right? Especially as you say, voxels are pretty well-suited to parallel rendering like that.


That's what surprised me most about this article: they crow that it's a CPU-only technology... why?

Because they don't know where raytracing research has gotten to with GPGPUs. Indeed, if a few years ago you had told me that GPGPUs would be handling the insanely branching search of raytracing, I'd have told you you were mad. Now we have iray. Bottom line: yes, it's a staggeringly inefficient algorithm compared to a CPU, but I've got 480 cores!!!

I'll place money that this technique is just raytracing with a nice, hierarchical voxel tree. Plenty of work is being (and has been) done on combining such spatial subdivisions with SIMD. Usually the search ends on a polygon with a collision test. The idea here, which I haven't come across (but I've not read every paper), is to store the subdivision down to sub-poly level so that you can skip the collision test. I hope they still share surface information between voxels, but maybe not. Essentially there are plenty of researchers who could implement their algorithm right now by adding "return true" for their poly hit test and increasing the subdivision threshold.

As Carmack said 114 days ago, "Nice idea. Be a few years till it's viable."

But I'd be surprised if it's this team that delivers the goods. They seem woefully uninterested in the wealth of research in the area.


In the context of voxel-based games, a vote for Voxatron: http://www.lexaloffle.com/voxatron.php


Someone correct me if I'm wrong, but I'm not sure that either Voxatron or 3D Dot Game Heroes actually use voxel based rendering. They may or may not store their models as voxels, but I'm pretty sure that at some point it's converted into a polygonal model and shoved into a traditional polygon based rasterizing pipeline. As opposed to doing some raycasting into a data structure containing voxels (e.g. an octree), which from my very limited understanding is roughly what a voxel based rendering pipeline looks like.

Doesn't take away the fact that both games look pretty damn nifty though.


Voxatron absolutely uses "true" voxel rendering.

> Voxatron is based on a virtual 128x128x64 display. It's a buffer of 3d video memory that is rendered out to the screen at the end of each frame, much as an old-school 2d display is. You can POKE bytes into the virtual memory, and they come out as voxels. I don't compromise on this -- even the menus are drawn into the voxel display. Hopefully one day I can get hold of a real physical 128x128x64 display and play Voxatron on it with almost no modification.

> The renderer is written in software (C + SDL). Each frame, scan through the virtual video memory back to front and look for voxels that have empty neighbours. If they are exposed, I transform the corners of a cube into screen space and scan-render the polygon. Shadows are done with a traditional shadow-map but sampled with a filter to get the soft shadows.
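The quoted exposure test can be sketched as a toy (assumptions: 6-connected neighbours, out-of-set positions count as empty, and the back-to-front painter's ordering is omitted). Only voxels with at least one empty neighbour would be handed to the cube rasteriser.

```python
def exposed_voxels(filled):
    """filled: set of (x, y, z) occupied voxels; anything not in the set is empty."""
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [v for v in filled
            if any((v[0] + dx, v[1] + dy, v[2] + dz) not in filled
                   for dx, dy, dz in neighbours)]

# A solid 4x4x4 cube: only its 4^3 - 2^3 = 56 surface voxels are exposed;
# the 8 interior voxels are skipped entirely.
cube = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
surface = exposed_voxels(cube)
```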

http://www.lexaloffle.com/bbs/?tid=260


I didn't realize they use their own renderer. Pretty neat.

I guess I was thinking that the raycasting approach I've seen used in some of the sparse voxel octree stuff I read wouldn't make sense when you want the blockiness of voxels to actually show up very clearly, plus that it looked like you could achieve the same look using polygon based cubes. Guess I should've done a bit more research.


You can do something like Voxatron with raycasting and it will look the same. It would just be inefficient with such big voxels. It becomes efficient when the voxels approach the size of 2D screen pixels.


In the Comanche series, "voxels" were a selling point on the box.

http://www.youtube.com/watch?v=Ku-ICQvQJGI&sns=em

http://en.wikipedia.org/wiki/Comanche_series

No story I've seen on this engine seems to mention that series.


The problem with NovaLogic's VoxelSpace engine (used in the Delta Force, Comanche and Armored Fist series) is that it uses a height map and, to speed up rendering, "locks" the Z axis to simplify a bunch of formulas, so the voxels comprising the terrain render much faster. Comanche 1 and 2 used sprites, so that wasn't as noticeable, but Comanche 3 introduced polygon-based models, where the third-axis speedup produces uneven deformation when looking up or down (you can see it in the video). You can see the same problem in Outcast. The choice of an engine with such limitations for tank, helicopter and ground-soldier games is interesting, and you can notice how most of the gameplay in those games involves long-range action along the horizontal plane. Notice how NovaLogic's jet fighter games do not use VoxelSpace but a polygon-based terrain engine.
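A hedged sketch of the VoxelSpace-style "locked Z" trick (constants and the perspective scale factor are illustrative, not NovaLogic's actual values): each screen column is drawn by marching a ray away from the camera across the height map and raising a running horizon, so nearer terrain occludes farther terrain. Looking up or down is faked by shifting the horizon offset, which is exactly what causes the shearing described above.

```python
import math

def render_column(heightmap, cam_x, cam_y, cam_h, angle, horizon,
                  screen_h, max_dist):
    """Return (distance, top_row) for each visible terrain span in one column."""
    tops = []
    highest = screen_h                  # running top of drawn terrain (rows grow down)
    sin_a, cos_a = math.sin(angle), math.cos(angle)
    for dist in range(1, max_dist + 1):
        mx = int(cam_x + cos_a * dist) % len(heightmap)
        my = int(cam_y + sin_a * dist) % len(heightmap[0])
        # Perspective divide on height only -- the Z axis never truly rotates.
        row = int((cam_h - heightmap[mx][my]) / dist * 100 + horizon)
        if row < highest:               # terrain pokes above everything drawn so far
            tops.append((dist, max(row, 0)))
            highest = row
    return tops

flat = [[0] * 64 for _ in range(64)]
flat[8][0] = 40                         # one tall hill 8 units ahead of the camera
spans = render_column(flat, 0, 0, 20, 0.0, 100, 200, 32)
```

With these numbers the hill fills the column from the top of the screen and occludes all the flat ground behind it, so only one span is emitted.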

Today voxel-based renderers must allow correctly projected 6-DOF orientation of the viewport, render not just terrain but arbitrary objects, and integrate with various lighting and shader effects to stand any chance outside the neo-retro or abstract category.


"voxels" are also a demoscene term for a method of rendering height-mapped landscapes in 3D, using a kind of raycasting technique. Technically they are "voxels", but the algorithm and their use is quite different from the sort of thing talked about here.

one thing is that only the landscape is rendered as "voxels" and it's mostly a textured height map, so it can't contain multi-level structures such as bridges, etc. it's best suited for mountain ranges and beaches and things like that.

just the term "voxels" is quite generic already, and doesn't really describe one algorithm over another, except that you're not using polygons.


W.r.t. low-res voxel and octree engines, I think we can develop their visual detail considerably further without upending the table and trying to eliminate polygons. Nadeo has been taking the "large model blocks" approach and refining it for almost a decade now:

http://www.pcgamer.com/2011/05/27/mad-mad-world/


I was going to mention Minecraft here, as it uses a voxel system whereby the local volume is divided into "chunks" that are generated from an algorithm or saved data. This has simulation advantages over similar software that uses a polygon mesh (e.g. 'From Dust'), where edits are merely computational changes to the surface rather than removal voxel by voxel. Editing the world in 'From Dust' is thus not too dissimilar to polygon geometry modelling in 3D software.

A simple example of the difference this affords is that large dynamic changes (e.g. a tornado 'mod') are the result of simply moving voxels around and letting the world renderer do its thing, whereas a polygon-based system has to computationally alter the various surfaces to represent the changes made, which can be quite a bit more 'work' when programming the various scenarios.


Unfortunately, this engine can only create an entire world out of voxels because it uses insane levels of repetition, i.e. compression. As soon as you do any of the kinds of interesting things that voxels let you do, you have instantly lost that repetition, and your "42 trillion" voxels are suddenly 42 trillion bytes. GLWT.

Ironically, I see pretty effective tornado effects in movies all the time, using polygons. An audience will happily believe that a tornado is composed of solid pieces of geometry (such as a car, or a cow, or individual roofing tiles) because they've seen it on TV. Tornadoes do not deconstitute matter as far as I'm aware. Thus hierarchical polygon models are ideal for tornadoes.


I feel like you didn't actually read my comment. Minecraft is a perfect example of how non-repetitive voxel worlds are processor intensive; it serves as a demonstration that unless fractal-like repetition is used, current-day computers are not up to the task of detailed voxel worlds. (Minecraft is a CPU hog.) It's trivially clear that this example is the result of repetition and not any breakthrough technology.

Also, I feel like your example is confused. I'm trying to be polite, but I think you should do some research about what this is before writing a counterpoint like you have. (What you've written isn't really meaningful.)

Let's take an example of a whirlwind in a desert or dusty plain: in a polygon mesh you're not seeing individual sand grains being lifted from the polygon surface, then flying around and landing somewhere else to form a pile, as you have suggested. In a voxel simulation you would see just that.

Now ignoring post production, what you're seeing is a well textured, but (crucially) introduced, particle effect and maybe some surface deformation for the sandy hill that is being destroyed. (This isn't how the effect is done in film by the way.)

Translating (i.e moving) cows and other objects is actually more on par with how voxels work. I.e. moving an object through space and placing it somewhere else. So you were half right in your thinking that way.

To finalise my point, the tornado "mod" example that I cited wasn't me discounting polygons as you've interpreted; it was referencing this video specifically (i.e. showing how voxels can trivialise the programming of advanced motion effects): http://www.youtube.com/watch?v=fSEwU5IqZ4A


I thought I'd read the whole article since some are still commenting on it. First of all, if someone were to post an article on HN about a new technology that gets recursive compression (you can compress something 80%, and then do that again and again) because "they have a new algorithm", they would be laughed off the front page. Some people just don't know that that is impossible. Is this what is being claimed here? There are three ways to get "unlimited detail" in computer worlds: 1) magical compression, 2) algorithmic generation, 3) hierarchical composition, and as we know, 1 doesn't exist.

Looking at the article, this is clearly using method 3. Let's look at this "new" hierarchical composition. Hierarchical composition is when any given space is composed of smaller objects, and those smaller objects may also be composed of smaller objects.

When you do this in 2D games from 1980, we use the technical term "tiling the crap out of everything". You have a bunch of obviously repetitive pieces. If each tile is 16x16, and there are 256x256 tiles on a map, then one could say that there are 16 million pixels in the world, on a computer that had only 1MB of memory.
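The arithmetic behind that tiling claim, spelled out (the numbers match the example above; one byte per pixel and one byte per tile index are assumptions):

```python
# Apparent world size vs. what is actually stored for a tiled 2D map.
tile_w = tile_h = 16
map_w = map_h = 256
unique_tiles = 256                      # assume at most 256 distinct tiles

# The world *looks like* a 4096x4096 image...
apparent_pixels = (tile_w * map_w) * (tile_h * map_h)

# ...but storage is just the tile map (one index per cell) plus the tile set.
stored_bytes = map_w * map_h * 1 + unique_tiles * tile_w * tile_h
ratio = apparent_pixels / stored_bytes
```

About 16.8 million apparent pixels out of 128 KB of actual data: a 128x "compression" that is really just repetition.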

So that right there is your clue that this guy is a fraud. 42 trillion voxels? Are they 42 trillion unique voxels, or are they tiled? [1] Right. (A "voxel" is the 3D equivalent of a 2D pixel.)

The thing about tiling is it doesn't just apply to pixels or voxels. Tiling is just a spatial subdivision algorithm that splits space up into 2D or 3D grids. You can store a color in each grid cell, and then it's a voxel. But you can just as easily store the head of a linked list of polygons. And you can also store a pointer to another tile array, and that tile array can then store voxels, or polygons, or more tile arrays. When you do this, it's a hierarchical data structure.
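A sketch of that hierarchical composition (names are illustrative): each grid cell holds either a payload (a colour, making it a voxel) or a nested grid, and a lookup recurses until it reaches a payload. The leaf type could just as well be a polygon list; only the payload changes.

```python
class Grid:
    def __init__(self, n, fill=None):
        self.n = n                     # n x n x n cells
        self.cells = {}                # sparse: a missing cell means `fill`
        self.fill = fill

    def set(self, x, y, z, payload):
        self.cells[(x, y, z)] = payload

    def lookup(self, x, y, z, size):
        """x, y, z in [0, size); descend nested grids to the leaf payload."""
        cell = size // self.n
        key = (x // cell, y // cell, z // cell)
        payload = self.cells.get(key, self.fill)
        if isinstance(payload, Grid):  # hierarchical case: recurse into sub-grid
            return payload.lookup(x % cell, y % cell, z % cell, cell)
        return payload                 # a colour (voxel) at this resolution

world = Grid(4, fill="air")            # a 16-unit world as a 4x4x4 grid
detail = Grid(4, fill="air")           # one cell refined into a finer 4x4x4 grid
detail.set(0, 0, 0, "red")             # a single fine voxel inside one coarse cell
world.set(2, 2, 2, detail)
```

One coarse cell carries sub-cell detail while the rest of the world stays a single level deep; that asymmetry is the whole point of the structure.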

Do games/graphics programmers know all about this? Yes. Do we use hierarchical composition? Yes! All the time! Do we use axis-aligned hierarchies? Yes, e.g. octrees. Do we use regular grid composition all the way down? Fuck no. Why not? Because splitting up space along predetermined, regular, axis-aligned divisions is fucking awful for modeling interesting 3D worlds [3]. What you get is a brick world: [2]. Such rigid, regular, axis-aligned hierarchy fits the real world poorly.

[1] http://media1.gameinformer.com/imagefeed/featured/gameinform... [2] http://media1.gameinformer.com/imagefeed/featured/gameinform... [3] Ok, minecraft being the exception =)


Minecraft demonstrates that (despite its possibly-inefficient coding) unique voxel based geometry is cpu heavy, and to pretend otherwise is fraud. From the outset it's obvious that recursion is affording the 'infinite' tag for this technology.

Interestingly, this technology, or even this idea, isn't anything new. A nice way of summarising it is 3D fractals: sure, it's infinite and richly detailed, but it's the same thing over and over again.


No it doesn't. It demonstrates that Minecraft's solution is heavy. Minecraft is a cellular automaton; that's why it's slow. It would be a mistake to believe that Minecraft demonstrates the effectiveness of voxel technology.


Try to reply with examples (and when you do, avoid choosing examples that follow confirmation bias) instead of broad, baseless refutations.

Minecraft clearly demonstrates the comparative heaviness of using voxel arrays to represent a world in contrast to a polygon mesh. This point is trivial, and is not unique to Minecraft.

Additionally I'm finding a large number of comments that begin with "No it doesn't", then either stating an identical argument to the parent, or not attempting to refute the parent comment at all. If you're doing this for "points" then shame on you.


What I don't like with "Unlimited Details" is that Dell refuses to say what the engine can't do yet. We know it's a work in progress. We (the internet) are excited about it because it's an experiment that's on scale with what we could expect from that branch of 3D graphics technology. Trillions of atoms, whatever. Show us what's still in progress because the internet is skeptical about this, this and this.

Hacking reflection by duplicating the scene is a good way to start, because no one can say whether it's a good or a bad solution to a non-trivial problem (it gets the job done for now), but can you have multiple coexisting versions of the world, like a vertical mirror, a horizontal body of water and an underwater section?

To Dell: just stop handwaving away the questions already. We get that you ignored the state of the art and built your own thing; just tell us what it can't do yet, show us your current progress, and you'll be met with much less skepticism, and even help, even if you keep your trade secrets. At least show us your specs! You won't be struck down by lightning if you talk; no one ever died from reinventing/copying the wheel and making it so good that the wheel can fly.


> Dell refuses to say what the engine can't do yet.

I'll go out on a limb here, and suggest it can't do real-time animation, texturing, lighting, shadows, collisions, etc. All the maps / screenshots shown seem to be static, procedurally generated and not artist-created.

Dell keeps throwing around this 'Unlimited Detail' marketing phrase which is very offputting. Until it's seen in action in an actual game it can be nothing but snake oil.


The article says that polygon objects can exist in the engine and interact with the "atoms". The polygon objects can be animated, fly around, cast shadows and interact with the rest of the world, but the article did not say anything about the atoms being animated. So I don't think you're going out on a limb, especially considering what other people (like Notch) have said on the topic. It looks like the atoms in the engine are strictly static.


I agree with you 100% - my comment was mostly wishful thinking, a description of how Dell could make me change my mind.

This kind of marketing is one I can't quite understand. I mean, what is he after? Buzz? Funding? What's his endgame? Why and how is he paying people to work on it? Is his marketing strategy to troll the profession and the audience while he really has something big up his sleeve?

Why is he using the Vaporware approach?


Generate enough press to keep funding going, particularly from the Australian government? They already got one such grant, I believe. Are more possible if they can put together enough clippings?


See also the gigavoxel research by an ex-colleague of mine: http://maverick.inria.fr/Members/Cyril.Crassin/ . This looks very similar, and is probably an extension of it...


As soon as I see animated objects moving about in a dynamically lit world, I will start to believe that Euclideon is on to something.

Maybe there could be some middle ground like in the old voxel days, where you would have a static (unlimited detail) background world and some traditional, polygon-based actors in the foreground. Looking at any modern game, just about everything on the screen is constantly moving, so color me sceptical even on that idea.

Also, I would like to know how they do lighting. It looks like they might use precomputed highlights and shadows. Needless to say that this would not be of much use for dynamic lighting.


This may not be as big an issue as you think. We had the same issue in the Crash Bandicoot games: the background was rendered a completely different way than the foreground (animating) elements. We made it work by approximating where the foreground elements should "sort in" to the background polygon layers. Where the heuristics were wrong, we tuned it manually, by pushing a foreground element forward or backward in the scene until it looked right.

Remember: you can hack stuff in games until it looks right. It doesn't actually have to work perfectly from a theoretical standpoint; it just has to work practically without too much additional tuning labor.


That still does not solve the issue that current games have a lot of moving assets and really not that much static geometry. Trees, shrubs and grass move in the wind, water flows, walls crumble when hit by bullets, that kind of thing. Everything is animated.


It seems to me that voxel worlds could make this problem easier, not harder: you can deform the world algorithmically, voxel by voxel, rather than using polygonal approximations. Imagine an acid blob eating an outdoor environment in a fantasy game: in a voxel world, this is like a fancy seed fill. In a polygonal world, this is much harder to simulate.
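A toy version of that "fancy seed fill" (the grid, the 6-connectivity, and the acid rule are illustrative assumptions): the blob repeatedly consumes every filled voxel adjacent to it, which is just a breadth-first frontier expansion over the voxel set.

```python
def acid_step(world, blob):
    """Consume every filled voxel 6-adjacent to the blob; return the grown blob."""
    eaten = set()
    for (x, y, z) in blob:
        for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            p = (x + dx, y + dy, z + dz)
            if p in world:
                eaten.add(p)
    world -= eaten                     # the environment is deformed in place
    return blob | eaten

# A 5x5x1 slab of terrain with acid dropped on its centre voxel.
world = {(x, y, 0) for x in range(5) for y in range(5)}
blob = {(2, 2, 0)}
world.discard((2, 2, 0))
for _ in range(2):
    blob = acid_step(world, blob)
```

After two steps the blob has eaten a Manhattan-distance-2 diamond out of the slab; no surface remeshing or polygon bookkeeping is involved.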

I'm not saying voxel worlds solve every such problem; just that there are likely to be as many things that are easier w.r.t. animating elements as there are things that will be harder.

[Edit: maybe I'm not explaining this well, but I guess what I'm saying is that I don't see any reason why any voxels in a voxel world have to be static. The data structures don't force that; in contrast, in a voxel world with a clever spatial data structure, every single voxel can be a dynamic, particulate object, subject to computing power. As I argue above, though, this is not practical until computing power improves enough. Perhaps this is your fundamental point, in which case we're in violent agreement.]


Not sure if you want to SEE it. But in the article it says they are doing just that.


"Dell shows me an as yet unreleased demo – not real-time in this case – that suggests Euclideon has already made some positive inroads in ensuring Unlimited Detail is compatible with existing middleware. The short video shows traditional polygon objects happily coexisting and interacting in the same space as a small Unlimited Detail environment"


Dammit, we spent years working toward a unified lighting model, and now we want to split it apart again?!


Collision detection is the big thing for me, sure you can fake it with invisible polygons but that usually leads to weirdness in the player experience.



I am surprised by the lack of interest in the search algorithm Dell says he designed and is using.

Searching such a large problem space for 1 to 2 million results 25x a second is amazingly impressive... This is what I am most curious about at the moment... Also how he is storing the full voxel point data for any given world that needs to be searched in real time. Replicated data or not (i.e. similar to GIF color data deduplication), you still have location data, or offsets, or something for every voxel position, which still results in a hellacious amount of data that needs to be searched efficiently.

Haven't seen details from Dell or others on either of these aspects, which I feel are cornerstones of the engine.


It's hierarchical data, most nodes of which are empty at the highest level, or 100% solid, terminating the search. It's basically a specific optimization of raytracing as far as I can tell.
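A sketch of that early-out (node layout and states are illustrative): tree nodes are EMPTY, SOLID, or MIXED, and only MIXED nodes are descended, so a query through mostly empty or mostly solid space touches very few nodes.

```python
EMPTY, SOLID, MIXED = 0, 1, 2

def query(node, x, y, z, size, visited):
    """node = (state, children or None); children is an 8-slot octant list."""
    visited[0] += 1
    state, children = node
    if state != MIXED:
        return state                   # terminate: uniformly empty or solid
    half = size // 2
    # Octant index: bit 0 = x half, bit 1 = y half, bit 2 = z half.
    idx = (1 if x >= half else 0) | (2 if y >= half else 0) | (4 if z >= half else 0)
    return query(children[idx], x % half, y % half, z % half, half, visited)

empty = (EMPTY, None)
solid = (SOLID, None)
# An 8-unit world: a solid floor, one mixed upper octant with a solid corner.
corner = (MIXED, [solid] + [empty] * 7)
root = (MIXED, [solid, solid, solid, solid, corner, empty, empty, empty])

v = [0]
hit = query(root, 1, 1, 5, 8, v)       # a point inside the mixed upper octant
```

Even the worst case here visits three nodes; a fully solid or fully empty octant answers in one step, which is the "terminating the search" behaviour described above.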


Hmm, what's so amazing about it? It's a voxel graphics engine; of course it's going to have great detail and no polygons (duh). That's not new. Comanche had this in 1992...

I'm not at all an expert in this area, can someone who is explain what the downsides of voxel graphics are? There must be some serious problems with it, because the technology is well known.

Is there something new that makes this particular engine stand out above the previous engines?


The big downside is that you have no idea ahead of time what parts end up on screen and which parts don't. With a voxel engine you create a giant tree that allows you to do a quick logarithmic lookup to a voxel for every pixel on the screen (more or less). Suppose you're standing on a hill and look down on the valley below. If the terrain and world below is created by artists and contains many unique objects then you end up with a voxel tree that's terabytes large. You can't keep it in memory, so performance will be awful. The only way (afaik) to keep the memory usage in check is to force reuse everywhere. So you then create ultra high resolution models of 100 different bricks and mortar. The artist then composes those "lego pieces" into different buildings. That would work, but then everything is going to look very repetitive again (which brings us back to square one).

Then of course there are the problems with lighting, animation and so on.


It's actually the reverse! Voxels make it easy to figure out what's on screen and what's not. That's their main advantage over polygons. The main disadvantage is that they are generally slower to render, but as you add more details, polygon renderers waste time on details that are far away or off screen while voxel renderers can discard those more easily, and eventually there's a tipping point.

You're right that lighting and animation are real problems with voxels. Animation in particular is pretty much impossible, which means you have to fall back to polygons for anything that moves or changes, which in modern games is a lot of stuff.


> It's actually the reverse! Voxels make it easy to figure out what's on screen and what's not.

No no, I explicitly said "ahead of time". It's easy to determine what's on screen when you're about to render the frame, but you have to load every single object in memory because every single object is a "candidate" that may end up on the screen; you just don't know ahead of time. You can't cull the voxel tree, even though 99% of the voxel tree won't be used when rendering a frame. So the memory overhead is immense.

Voxel trees don't make it easy to discard things that are far away at all, because there aren't any low-resolution objects that you can use for far-away objects. You have to load many objects in memory even when you use them to render just a pixel or two.

The polygon-based worlds we have today use various low-resolution models for when you view at a distance, and as you get closer the higher resolution models "pop in". This tech advertises that you have infinite detail and that there's no need for objects with different level of detail. This means that the complete point cloud of every object has to be loaded in memory and that's a very real problem and it's in complete contradiction to the claims made by the company that artists can create infinitely detailed worlds without having to worry about vertex counts or whatever. Artists have to compose their world out of re-usable high-detail objects or the sparse voxel octree will be way too large. This means the artists will be much more constrained than they are today, not less.


You're wrong. You don't need to know what's visible ahead of time because you can load the tree lazily on demand. Voxels lend themselves naturally to perfect LOD scaling in a way exactly analogous to how mipmaps and megatextures work for 2D textures. There is no analogous algorithm for polygons; polygon mesh LOD is a very hard problem that is solved only imperfectly today and with a huge labor overhead for both artists and programmers. John Carmack himself has proposed voxels as a solution to this problem: http://www.pcper.com/reviews/Graphics-Cards/John-Carmack-id-...
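The mipmap analogy can be sketched as a depth-selection rule (all constants here are illustrative, not from any real engine): pick the tree depth at which one voxel projects to roughly one pixel, so distant regions only ever need the shallow levels of the tree loaded.

```python
import math

def lod_depth(distance, voxel_size_at_depth0,
              pixels_per_unit_at_unit_dist=100, max_depth=12):
    """Tree depth whose voxel size covers roughly one screen pixel."""
    # Apparent size (in pixels) of a depth-0 voxel at this distance:
    apparent = voxel_size_at_depth0 * pixels_per_unit_at_unit_dist / distance
    # Each extra depth level halves voxel size, halving the apparent size,
    # so the useful depth grows with log2 of the apparent size.
    depth = int(math.log2(apparent)) if apparent > 1 else 0
    return min(depth, max_depth)

near = lod_depth(1, 10)      # close up: descend deep into the tree
far = lod_depth(1000, 10)    # far away: the root level already suffices
```

Nothing below the chosen depth needs to be in memory for that region, which is the lazy, on-demand loading claimed above.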


Can't you still have a normal octree or something with segments of the voxels split by their spatial location with voxel subtrees inside of that? I don't understand why you can't apply spatial division techniques to voxels.


I don't think it would accomplish much. But sure, you can precompute which objects are visible in which circumstances and do some culling based on that data.

My understanding is that the problem is that there's just too much stuff that you have to keep in memory if you don't use LoD to distinguish between nearby and far away objects.

Consider this screenshot from rage: http://pcmedia.ign.com/pc/image/article/116/1162072/rage-201...

What are you going to cull? There's just too much stuff to load in memory. Too much stuff that actually ends up on your screen. Too much point cloud data. Now games use low resolution models and prerendered billboards for far away objects. With a voxel octree you're just SOL, no matter how clever your spatial optimizations.

Unless hard drives (or SSDs) become a few orders of magnitude faster I don't see how the data can be fetched from disk quickly enough.


I think you can, and I think they do; but that still doesn't solve the memory issue. Actually, it makes it worse, since now you have to store multiple 'resolution' versions of the same data.

But if they still need 'traditional' techniques like LoD, rendering at various resolutions, etc., then the complexity and running speed can't be all that much better than a traditional approach. The way I've been reading these articles is that somehow he's got a very fast method of deciding which voxels to show, without any pre-processing or optimisation: just brute force. I'm curious to see the first demo they'll come up with, especially since they only now seem to be at a point where they can render without shading, animation, etc., and they make it seem as if those are just implementation details that can be added in an afternoon. We'll see.


To be accurate, Comanche didn't have a true volumetric renderer; it was just raycasting a heightmap[1], essentially just as much of a hollow shell as polygon rendering techniques. However, you are correct to point out that this tech is not particularly revolutionary; others such as the Atomontage engine[2] show voxels can be leveraged to create detailed, destructible organic environments.

[1] http://www.flipcode.com/archives/Realtime_Voxel_Landscape_En...

[2] http://www.youtube.com/watch?v=_CCZIBDt1uM


I think the major downside is the world is immutable. For the search algorithm to run in short enough times to render in real time the point cloud needs to be organised very carefully.

Part of the point cloud needs to move for animation, which involves re-indexing at least part of the world. Doing this once per frame is probably far too expensive (at least currently), which is why animation doesn't feature in their videos.
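A toy sketch of that cost, assuming the renderer keeps some spatial index over the points (a uniform grid here for simplicity; the real engine presumably uses an octree, which is costlier still to rebuild):

```python
from collections import defaultdict

def build_index(points, cell=1.0):
    """Bucket points into a uniform grid keyed by integer cell coordinates."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    return grid

points = [(i * 0.1, 0.0, 0.0) for i in range(100_000)]
index = build_index(points)   # O(n) work...

# Animating even one limb moves many points, and the naive approach redoes
# the whole O(n) build; at 60 fps that has to fit in under ~17 ms per frame.
moved = [(x, y + 0.5, z) for (x, y, z) in points]
index = build_index(moved)    # ...paid again, every frame
print(len(index))
```

Even this trivial O(n) rebuild becomes painful when n is in the billions, which is presumably why their videos show only static worlds.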

That doesn't mean this technology should be dismissed; most world data in 3D scene graphs is static, so it does have applications. I don't see it overtaking rasterised graphics entirely, but I think there is value in it.

I'm not an expert either, so take my comment with a healthy bucket of salt.


notch posted something about the downsides of voxels, like how you can't see the same object from different angles and you have a hard time animating things...


I can see how animations might be difficult, depending on what sort of data structure one uses to store the voxel data. But why would you not be able to see the same object from different angles?


I think he means that you cannot rotate an object and keep it aligned to the voxel grid at the same time. It would be a lossy operation, and require interpolation, just like rotating an image on a 2D grid.
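The 2D analogy is easy to demonstrate. A sketch (plain nearest-neighbour resampling, nothing engine-specific): rotating a grid and then rotating it back does not restore the original.

```python
import math

def rotate_grid(grid, theta):
    """Rotate a square 2D occupancy grid about its centre using
    nearest-neighbour sampling (inverse mapping)."""
    n = len(grid)
    c = (n - 1) / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            # Inverse-rotate the destination cell to find its source cell.
            sx = cos_t * (x - c) + sin_t * (y - c) + c
            sy = -sin_t * (x - c) + cos_t * (y - c) + c
            ix, iy = round(sx), round(sy)
            if 0 <= ix < n and 0 <= iy < n:
                out[y][x] = grid[iy][ix]
    return out

# A diagonal line of filled cells.
n = 15
grid = [[1 if x == y else 0 for x in range(n)] for y in range(n)]

# Rotate by 30 degrees, then back again.
theta = math.radians(30)
round_trip = rotate_grid(rotate_grid(grid, theta), -theta)

lost = sum(grid[y][x] != round_trip[y][x] for y in range(n) for x in range(n))
print(lost > 0)  # the round trip does not reproduce the original exactly
```

The same lossiness applies per-slice in 3D; the question is just whether the voxels are fine enough that nobody notices.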


But if your grid's resolution is finer than the size of a rendered voxel, that wouldn't matter much - and you could keep rounding errors in check by anchoring larger units of voxels at a fixed location in space. I can see the drawbacks of the voxel cloud approach (it requires a lot of memory; what algorithm is fast enough to cull voxels in such a massive data set; what about animation?), but 'can't do rotation' doesn't seem like a particularly strong argument (unless I'm misunderstanding the actual argument, which is entirely possible...)


I'm not saying it is entirely impossible. However, rotating volume data (which voxels are) is computationally a lot more intensive. With polygonal objects you can just change the transformation matrix and re-render the screen and you're done. At most, you have to deform the vertices that make up the outer shell of the object to do things like make a person walk.

With voxels, you'd have to recompute all the voxels of the object on a 3d grid. Also there are issues with ragged boundaries, which can be prevented by using antialiasing, but which is probably trickier (=more computationally intensive) than in 2D...


I'm still not convinced by these arguments ;)

The displacement of the voxels can also be done with a simple matrix transformation, as far as I can see; the same as you'd do for the vertices in a 3D model in a 'traditional' 3D pipeline (OpenGL/DirectX) - unless I'm behind the curve and animation itself is done on the GPU nowadays, but I don't think it is.

Ragged boundaries wouldn't be an issue as long as there are enough voxels. As I understand it, that's the whole point - you just have lots and lots of voxels, at a much finer resolution than you'd ever want to render at, so that you just (by brute force) render once without having to worry about anti-aliasing, gaps between planes, etc.


The displacement of the voxels can also be done with a simple matrix transformation, as far as I see it

Yes, but the point is that you have to displace many more voxels than you'd have to displace vertices otherwise. Vertices are only on the boundaries of the object (and quite sparsely at that, if you use normal mapping shaders etc. wisely), but voxels fill the entire object densely...
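The scaling difference is easy to put numbers on. A sketch comparing a solid voxel cube, a hollow one-voxel-thick shell, and a polygon mesh of the same cube:

```python
# Rough comparison, assuming an axis-aligned cube of side n voxels.
def cost_comparison(n):
    solid_voxels = n ** 3                       # every interior cell occupied
    shell_voxels = n**3 - max(0, n - 2) ** 3    # one-voxel-thick hull only
    mesh_vertices = 8                           # a cube mesh needs its corners
    return solid_voxels, shell_voxels, mesh_vertices

for n in (16, 256, 1024):
    solid, shell, verts = cost_comparison(n)
    print(f"n={n}: solid={solid:,} shell={shell:,} vertices={verts}")
```

The shell grows as n², the solid as n³, and the mesh stays at eight vertices regardless of resolution, which is the core of the "vertices are sparse, voxels are dense" argument.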


> but voxels fill the entire object densely

I was under the impression that they fill the object's hull


I guess you could do that, make the objects hollow. But that would obliterate the 'low-hanging' advantages of voxels such as destructible terrain, being able to cut things in pieces, realistic physics simulation, etc...


Rotating (in general, transforming) voxels in real time is difficult compared to polygons, because with polygons you only transform the vertices, while with voxels you have to transform every voxel.

EDIT: If you are speaking about preprocessing data - yes, you can rotate as much as you want; the only problem will be that an object instanced 1000 times with different orientations/scales will take 1000 times more memory than an object instanced 1000 times with the same orientation/scale. With polygons there is no difference.


Sorry, by 'preprocessing' I meant things still done on each frame, but anything that is not raw pushing-vertices-into-the-gpu-pipeline, i.e. all polygon culling, LoD simplification etc.

But yeah, the amount of data you'd have to work on would be much greater, but the way I interpret all of this is that he's found algos to do that very fast, regardless of whether it's a lot of data.


I think the big deal is supposed to be the incredible level of detail apparently achieved without being all that taxing on the hardware (though the jury is out on whether it's some sort of trickery - it sounds too good to be true).

Edit: They claim in the article to have devised an approach that no-one else has come up with, and they give this as the reason why no-one else is doing something similar.


Comanche had shitty detail, as did all the games that came after (Delta Force, etc.) that used the same engine. With this you can zoom down to the pebble.

Supposing this is legit, I'm cautiously optimistic; homie has probably uncovered some very novel techniques that allow a granular level of detail you couldn't get before with voxels, with optimizations that don't require a GPU... this is pretty amazing.


Comanche had shitty detail because it was running on 19-year-old hardware. Scale that up 10,000x and refine the algorithms over a couple of decades, and Dell's demo is the logical result.


Well, see above, it actually wasn't true voxels in this sense.


I think the big deal is that it's supposed to convert polygon-based graphics to voxels in real time. Which I doubt it can actually do, but if it can, it would be kind of cool.


the polygon to voxel conversion is just so artists can use existing tools like 3ds max or maya, which work on polygon models. think of the trees in their demo, the models have to come from somewhere. so realtime conversion is not needed.

as far as i can tell their voxel tech is just for non-moving objects like the landscape, buildings, etc. animated and moving actors are still rendered as polygons and blended into the scene (according to the article, this already works to a certain degree, but they weren't able to give a live demo, just a video).


Their polygon-to-voxel technology is a sideline, I believe, and it doesn't sound like it's realtime.


Is something like "GPUs used to be fighting one another for more power, memory, and so forth, but now they have their languages like _Kuda_" (Page 6) just a random typo or a reason to believe that someone didn't do his homework before typing this down?

I'd love to see this released, but the article was far too positive and the tone read too much like marketing to me. Some careful, not too aggressive words at the introduction and blessing after blessing afterwards, sprinkled with the seemingly unbiased author's impression and description of a very nice and professional guy. Mhhhh...

Edit: Just saw that user 'Causification' already said something similar, albeit with a good amount of more emotion. Still, I'm going to keep this here as a personal impression of someone that has no clue about graphic engines in general, i.e. my layman's reaction after reading the article.


Those of us who don’t keep showdead turned on can’t see what Causification has said, because he/she is hellbanned. I was quite confused by your comment, till I guessed what had happened and turned on showdead to check.


What do people make of the patent history? Eight lapsed applications and one withdrawn over a 15 year period. Is that unusual?

http://pericles.ipaustralia.gov.au/ols/auspat/quickSearch.do...


I'd really like to know how much memory is being used per unique model. Until we see more than a couple different models being re-used everywhere, I'm going to stay skeptical. Also his reasoning for the lack of variety of '3 weeks before Gamescom' sounds a bit BS to me, as this video has been around for much longer than Euclideon's last media-spree: http://www.youtube.com/watch?v=KSvptZCJGyI. You would think they would have created at least one demo with a larger variety of objects by this point.

I'm also very suspicious of all this recent positive coverage of Euclideon. There was a pretty suspect feature on Euclideon on an Australian game review show called Good Game a couple weeks back, talking with this same guy, and not really making much mention of the skepticism involved (or at all if I remember correctly), and as far as I know no outspoken skeptics have been able to get their hands on it. The clip from Good Game can be seen here: http://www.youtube.com/watch?v=f_ndZ8ETbqU


The fact they are financially backed by the Australian Government (which has an approximately 0% track record as a VC) seals their doom. If they were in the US they'd have to figure out a way to produce something commercial or at least potentially commercial, now they can burn through $2M while fading into irrelevance (if not there already).


Voxel engines are "new" in the same sense that social networks are "new" from the POV of someone who remembers using modem-based BBSes and Unix Usenet groups back in the 80's. Meaning: mostly old, maybe new in some small subtle corner aspect of it, or some twist. But not really new.


The awesome Aussie show 'Good Game' did a story on Unlimited Detail recently: http://www.abc.net.au/tv/goodgame/video/default.htm?src=/tv/...


After reading all the comments it seems there are people who are playing with this kind of technology. Is there any open source code which can be studied?


If this software is a hoax -- and fair enough, it sounds too good to be true -- can someone explain how they were able to fake it in the live demo? I understand that this is supposed to be impossible, cranks often claim the establishment is 'suppressing' them, etc., etc., but if this is snake-oil peddling, then HOW did they do it? The demo LOOKS real.


There are supposed to be 42 trillion "voxels" in that demo.

If each "voxel" took only a single bit to store, this works out to require [42 trillion / (1024 * 1024 * 1024 * 8) ≈ 4,889.44] gigabytes of memory.

How do you store that while you calculate the screen pixels? Modern PCs only have 4 or 8 gigabytes of RAM!


This is addressed in the article: they used many replicated objects. They stored one rock, then rendered it a gajillion times. They claim they were able to do this by applying something like a search algorithm to the objects being rendered.

That's the magic of it -- they are able to create worlds with insane amounts of detail, but in an intelligent way that means they don't need an insane amount of computational power to render it.
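A minimal sketch of that idea (instancing; the names and sizes here are made up, not from Euclideon): many placements share one copy of the voxel data, so apparent detail scales with instance count while memory scales only with unique models.

```python
from dataclasses import dataclass

@dataclass
class VoxelModel:
    name: str
    data_bytes: int   # memory for the one shared copy of the voxel data

@dataclass
class Instance:
    model: VoxelModel
    position: tuple   # an instance is just a reference plus a placement

rock = VoxelModel("rock", data_bytes=50 * 1024**2)   # 50 MiB model, made up
instances = [Instance(rock, (x * 2.0, 0.0, z * 2.0))
             for x in range(1000) for z in range(1000)]

# A million rocks in the scene, but the voxel data is stored exactly once.
unique_models = {id(inst.model) for inst in instances}
print(len(instances), len(unique_models))  # 1000000 1
```

Which is also why skeptics keep pointing at the repetition in the demo videos: instancing explains the detail and the memory footprint at the same time.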


Whether or not a voxel engine is the right way to go, or even whether it does what it claims to do, I think Dell should be commended for actually going out and trying to do the thing - it's an amazing achievement to actually get somewhere with it.


way to write a lot about it without really talking about it... tl;dr anyone?


tl;dr: i'm sceptical, this might be snake oil, omgthisisthegreatestthingeverandwillrevolutionizeeverything!

honestly, the article is not of the highest quality.


"... he was forced to solve the riddles himself, rather than plucking the accepted solution from a textbook ..."

The thing is, the textbooks also have all the things that didn't work or don't work well. Even some educated graphics programmers often fail to understand that representing 3D objects is just a data representation and transformation problem. Sure, a hard one, but there's nothing magic about it. Representing the data as a bag of "atoms" instead of polygons just isn't a breakthrough. The problem is still spatial search. The problem is still a simulation of optical physics. Graphics is just an optimization problem now. Attend an IEEE conference and maybe 5% of papers will be new theory; the rest will be about effective optimization techniques.

When someone comes to you and says "I have this awesome idea because I didn't read anybody else's ideas", just walk away.


Interesting article. Judging by Dell's (the CEO) calm attitude towards "haters", I personally believe that he truly has something revolutionary to offer.

Only time will tell though.


I think Dell makes money the same way other scammers make money, produce as much hype as possible with half truths, and by the time people figure out what is real, you'll have already cashed in.

I guarantee he is optimizing his entire company for fancy demos. He'll sucker some idiot big company into buying his technology, then we'll never ever see it in a product.

These things are predictable as clockwork.


Most likely. They already got a $2 million innovation grant from the government. Now they're working on better shadows and fancier worlds... mere demo stuff.


Mmmm, and what if it's real? What if he earns even more than what he already has because people and companies alike think he's a fraud?

All I can say is good for Dell.


How is that indicative of anything but a thought-out publicity strategy?

If one were in the perpetual-motion machine business, taking a calm attitude towards haters would be pretty much the only option, since there would be no way to argue with them.


... and if he were not in the perpetual-motion machine business taking a calm attitude wouldn't be a bad idea either. So logically it's not a very informative data point.

Nevertheless, I did watch him interviewed and give the real-time interactivity demo to an AU gamer mag a few months ago (I think it's still on Youtube).

I know a bit about the traditional polygon rendering pipeline and I was quite impressed with his response to his critics.

I'm reserving judgment since there are still big unanswered questions. But I would not expect him to be satisfying my curiosity with answers if his strategy is a legitimate attempt to develop the technology with a bigger partner (e.g. a console company or first-party studio).


As someone who knows more than 'a bit', I can tell you, it smells like snake oil from miles away. All the obvious signs are there: needlessly bashing existing tech, handwaving around very hard problems and demo videos that are staged to hide the obvious flaws from the untrained eye.

Plus, they've been hyping up the same tech for years now, with very little progress. People keep throwing the same criticisms at them, and they keep delivering very unimpressive responses.

If they've indeed scored $2 million of government grants already with this demo, as others allege, then it couldn't be more obvious: it's a zombie company designed to look impressive and leech off investor money, nothing more. It's centered around tech that has already been explored thoroughly by academia and industry, and we know where the practical problems lie in turning it into a viable game tech.



