Pixel Collision in MDX

EFileTahi-A

Contributor
Joined
Aug 8, 2004
Location
Portugal / Barreiro
I opened a thread about this roughly two years ago and actually got a solution there, but it now turns out that solution is not fast enough.

The solution was to copy a portion of the rendered backbuffer surface back into a BMP object and then do the pixel colour check there. Unfortunately, passing a surface through a stream is way TOO slow.

Is there any other way to access a specific pixel on the backbuffer and retrieve its raw color? Can this be done directly? I can't believe such a simple task is giving me such a hard time.


My work is halted until I find a solution for this.

Thanks in advance.
 

PlausiblyDamp

Administrator
Joined
Sep 4, 2002
Location
Lancashire, UK
In practice reading from the back buffer will always be a slow operation, as it involves dragging bytes from the video card back to system memory, which forces the CPU to wait on the GPU.

It is always going to be better in performance terms to do collision detection as part of the game loop based on the game state rather than relying on the rendered graphics. As part of your game state you should know the position / rotation of all objects and the textures being rendered so you should be able to detect collisions that way.

http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2D/Coll_Detection_Overview.php is part of a bigger tutorial (XNA but the concepts are the same) that deals with collision detection and is well worth a read.
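The state-based idea above can be sketched in a few lines (shown here in Python purely for illustration; none of this is an MDX API): since the game state already knows each object's position and size, a simple axis-aligned bounding-box test needs no data from the GPU at all.

```python
def boxes_overlap(ax, ay, aw, ah, bx, by, bw, bh):
    """Axis-aligned bounding-box test driven purely by game state
    (positions and sizes), with no read-back from video memory."""
    return (ax < bx + bw and bx < ax + aw and
            ay < by + bh and by < ay + ah)
```

Rotation or irregular shapes need more than this, but it shows why the rendered pixels never have to leave the video card for the broad check.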
 

snarfblam

Ultimate Contributor
Joined
Jun 10, 2003
Location
USA
Couldn't you draw to a separate render target and directly examine the texture data? (I don't know if this offers any performance benefit over simply resolving the backbuffer, but any way you can look directly at the raw data works.) You can just draw the texture directly to the back-buffer to get it onto the screen. This would eliminate the costly DirectX/GDI+ interop.

On the whole your approach sounds like an iffy proposition. You're depending on the rendered result to implement other logic. This means that the CPU spins idle while the GPU is working, and then the GPU does the same, idling while the CPU works. I don't know how you are doing your rendering, but I'm guessing there exists an algorithm to do your collision detection without making your application overly "CPU-bound". If you are drawing sprites you could convert your screen coordinates to texture coordinates and examine the texture. If you are drawing geometry then you should be able to do the collision detection with a purely mathematical approach.

Shawn Hargreaves has a good article that explains the concept of CPU-bound vs. GPU-bound programs. The important concept is managing work between the CPU and GPU: the less they need to go back and forth, the better. Ideally the CPU will rarely ever need to hear from the GPU.
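The sprite suggestion above can be sketched like this (Python used for illustration; the mask layout is an assumption - a 2D array of booleans where True marks an opaque pixel): for an unscaled, unrotated sprite the screen-to-texture mapping is just a translation by the sprite's position.

```python
def hits_opaque_pixel(mask, screen_x, screen_y, sprite_x, sprite_y):
    """Convert a screen coordinate into the sprite's texture space and
    sample a CPU-side opacity mask there - no GPU read-back needed."""
    tx = screen_x - sprite_x   # translation only: sprite is unscaled/unrotated
    ty = screen_y - sprite_y
    if 0 <= ty < len(mask) and 0 <= tx < len(mask[0]):
        return mask[ty][tx]
    return False               # point lies outside the sprite entirely
```

Scaled or rotated sprites would need the inverse of the sprite's transform here instead of a plain subtraction, but the principle is the same.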
 

EFileTahi-A

Contributor
Joined
Aug 8, 2004
Location
Portugal / Barreiro
Thanks for the replies.

Well, maybe I'm applying the wrong logic to MDX. My logic is based on the GDI+ pixel-detection concept, where I examine the backbuffer in the area the laser ray is travelling through. I scan ahead of the laser's path by as many pixels as its predefined speed.

I'm trying to avoid math formulas as much as I can because I want the ray to be able to hit a single pixel if required, not to mention that shapes can get as irregular as hell. Math formulas are time consuming. With pixel collision, detection works anywhere under any circumstance, since the algorithm runs over a black-and-white image where white means collision.
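The look-ahead scan I mean can be sketched as follows (Python for illustration; the mask is assumed to be a 2D array where truthy = white = collision): step along the ray's direction, at most `speed` pixels per frame, and stop at the first white pixel.

```python
def first_hit_along_ray(mask, x, y, dx, dy, speed):
    """Scan ahead of the laser along direction (dx, dy), up to `speed`
    pixels, and return the first white (collision) pixel or None."""
    height, width = len(mask), len(mask[0])
    for _ in range(speed):
        x += dx
        y += dy
        xi, yi = int(round(x)), int(round(y))
        if not (0 <= xi < width and 0 <= yi < height):
            return None        # ray left the playfield
        if mask[yi][xi]:
            return (xi, yi)    # hit: white pixel means collision
    return None
```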

[snarfblam]
"You can just draw the texture directly to the back-buffer to get it onto the screen. This would eliminate the costly DirectX/GDI+ interop."

Could you please explain this a bit further? Sorry, I'm still a noob.


Thank you.
 

PlausiblyDamp

Administrator
Joined
Sep 4, 2002
Location
Lancashire, UK
Math formulas may be time consuming, but you can optimise the general idea: e.g. rather than doing a pixel-perfect collision test straight away, check against a rectangular bounding box first and only check the actual pixels if a collision with this rectangle is found.

Reading from the back buffer will always be orders of magnitude slower than the time the video card takes to render the image itself. As an example I took a screenshot from a game at 1360 x 768 resolution and converted it to a 24-bit bitmap - it weighed in at approx 3 MB. That means doing collision detection from the back buffer at this screen size would drag approx 3 MB per frame out of video memory into system memory. Even at just 25 fps that is about 75 MB a second coming out of the video card, on top of all the information being sent to it.

Although video cards do provide a means to read the back buffer from them it isn't really the ideal scenario and it is not designed to be a high performance operation.
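The bounding-box-first idea can be sketched like this (Python for illustration; masks are assumed to be 2D boolean arrays with True = solid): only if the two boxes overlap are individual pixels compared, and then only inside the overlap rectangle.

```python
def pixel_collide(ax, ay, a_mask, bx, by, b_mask):
    """Broad phase: intersect the two bounding boxes.
    Narrow phase: compare mask pixels only inside the overlap."""
    ah, aw = len(a_mask), len(a_mask[0])
    bh, bw = len(b_mask), len(b_mask[0])
    left, right = max(ax, bx), min(ax + aw, bx + bw)
    top, bottom = max(ay, by), min(ay + ah, by + bh)
    if left >= right or top >= bottom:
        return False                     # boxes don't even overlap
    for y in range(top, bottom):
        for x in range(left, right):
            if a_mask[y - ay][x - ax] and b_mask[y - by][x - bx]:
                return True
    return False
```

For most frames the broad phase rejects the pair immediately, so the per-pixel loop runs only on the rare near-misses.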
 

EFileTahi-A

Contributor
Joined
Aug 8, 2004
Location
Portugal / Barreiro
Check the attached image.

Ok, I just got this idea while drawing the picture...

WHAT IF!? When loading the image, I do a full pixel check over the BMP files and store the coordinates of all white pixels in a Point[] array (or similar) for that specific tile (x, y)? The pixel data collection can be done through my editor, so this pass is unnecessary when the game runs.

This way, when the ray travels over a tile containing collision pixels, the collision check is triggered with pixel-perfect detection.

This will be CPU dependent, yes, and the worst case scenario is 1024 Points per tile - or the tile could just point to a predefined pixel collection based on the tile's graphic name at a given x,y position. What do you think? This can't be slower or require as much bandwidth as allocating textures or backbuffers to read data from, right?
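The editor-side pre-pass described above might look something like this sketch (Python for illustration; the mask is assumed to be a 2D array where truthy = white = collision): one walk over the tile's mask yields the point list the game can load instead of re-scanning pixels at runtime.

```python
def collect_collision_points(tile_mask):
    """Editor-side pre-pass: record the (x, y) of every white pixel
    in a tile's mask so the game only ships a point list per tile."""
    return [(x, y)
            for y, row in enumerate(tile_mask)
            for x, is_white in enumerate(row)
            if is_white]
```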

Can't wait to hear from you!
 

Attachments

  • MaskTile.png
    464 bytes · Views: 17

PlausiblyDamp

Administrator
Joined
Sep 4, 2002
Location
Lancashire, UK
If you are loading the bitmap into memory it might be worth keeping it in ram and using the image data directly rather than bothering with an array of points - given a known height and width it is pretty easy to convert an xy coordinate to a pixel directly.

You could even optimise the checks a bit more - your image could be divided into about 5 rectangles to hint at the state. e.g. the top and bottom are entirely black down to a certain row, so you could just check whether the y coordinate falls in those bands and skip the pixel lookup entirely if it does.

Even if this is slow it will still be quicker than dragging the image from the back buffer to do the same kind of checks anyway.
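Both suggestions combine into a few lines (Python sketch; the mask is assumed to be kept in system RAM as one flat byte array, one byte per pixel, non-zero = collision): the offset is just y * width + x, and the known all-black top and bottom bands skip the lookup entirely.

```python
def is_collision(pixels, width, x, y, band_top, band_bottom):
    """Flat-array mask lookup with a cheap band pre-check.
    Rows above band_top / at or below band_bottom are known to be
    entirely black (no collision), so no pixel read is needed there."""
    if y < band_top or y >= band_bottom:
        return False
    return pixels[y * width + x] != 0
```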
 

EFileTahi-A

Contributor
Joined
Aug 8, 2004
Location
Portugal / Barreiro
[PlausiblyDamp]
"If you are loading the bitmap into memory it might be worth keeping it in ram and using the image data directly rather than bothering with an array of points - given a known height and width it is pretty easy to convert an xy coordinate to a pixel directly."

The problem is that I'm not loading the BMPs into system RAM; they are being loaded as textures into the graphics card's RAM. That's what complicates things. I could solve the whole thing by loading the graphics into both the graphics card's RAM and system RAM, but that would consume twice the resources.

Anyway, I will give the Point[] approach a try. If it is fast enough it will solve the precision problem shown in my last post.
 

snarfblam

Ultimate Contributor
Joined
Jun 10, 2003
Location
USA
[EFileTahi-A]
"I could solve the whole thing by loading the graphics into both GFX card and System ram, but this would consume twice the resources."
Most optimizations are a trade-off between memory consumption and CPU usage. In this case, it sounds like a great deal. Lose 3 megs of RAM for a 1000% performance boost? The fact of the matter is that you want to access this data from both the CPU and the GPU, and they each have their own memory. You keep carting it from one to the other anyway, and your app has to keep some RAM handy for the temporary data regardless. I'm guessing that loading the same data in both places is going to benefit you in more ways than you expect.
 

EFileTahi-A

Contributor
Joined
Aug 8, 2004
Location
Portugal / Barreiro
[snarfblam]
"Most optimizations are a trade-off between memory consumption and CPU usage. In this case, it sounds like a great deal. Lose 3 megs of ram for a 1000% performance boost? The fact of the matter is that you want to access this data from the CPU and the GPU and they both have their own memory. You keep carting it from one to the other anyways, and your app is going to have to keep some RAM handy for the temporary data. I'm guessing that loading the same data in both places is going to benefit you in more ways than you expect."

Well, I have to agree with you. I could just load the MASK BMPs (the black-and-white collision data) and sample them separately for collision checks, based on the x,y map coordinates where the laser ray is travelling.

I guess I was avoiding the poor speed of rendering with GDI+. But checking a small portion of the map (like 5x5 tiles) around the laser ray will not impact the FPS.

Thanks m8 for the post, it did turn out to be QUITE useful.
 
Last edited: