Texture to Vertices method is constructing flat surfaces

Topics: Developer Forum, Project Management Forum, User Forum
Feb 18, 2011 at 2:48 PM
Edited Feb 18, 2011 at 10:22 PM

I am using the 'Textures to Vertices' technique to construct Bodies, but I'm getting flat surfaces for most of the textures.

The following is the code I'm using to get the collection of vertices for a given Texture (based on Demo1 from AdvancedSamplesXNA):

        static List<Vertices> ToVertices(Texture2D texture, out Vector2 origin)
        {
            //Create an array to hold the data from the texture
            uint[] data = new uint[texture.Width * texture.Height];

            //Transfer the texture data to the array
            texture.GetData(data);

            //Find the vertices that make up the outline of the shape in the texture
            Vertices textureVertices = PolygonTools.CreatePolygon(data, texture.Width, false);

            //The tool returns vertices as they were found in the texture.
            //We need to find the real center (centroid) of the vertices for 2 reasons:

            //1. To translate the vertices so the polygon is centered around the centroid.
            var centroid = -textureVertices.GetCentroid();
            textureVertices.Translate(ref centroid);

            //2. To draw the texture in the correct place.
            origin = -centroid;

            //We simplify the vertices found in the texture.
            textureVertices = SimplifyTools.ReduceByDistance(textureVertices, 4f);

            //Since it is a concave polygon, we need to partition it into several smaller convex polygons.
            List<Vertices> list = BayazitDecomposer.ConvexPartition(textureVertices);

            //Now we need to scale the vertices (the result is in pixels, the engine uses meters).
            //At the same time we flip the y-axis.
            var scale = new Vector2(0.01f, -0.01f);

            foreach (Vertices vertices in list)
            {
                vertices.Scale(ref scale);

                //When we flip the y-axis, the orientation can change.
                //FPE works with CCW polygons only, so restore the winding.
                vertices.ForceCounterClockWise();
            }

            return list;
        }

Usage would then be as follows:

List<Vertices> vertices = ToVertices(texture, out origin);
Body body = BodyFactory.CreateCompoundPolygon(world, vertices, density);
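As an aside, the CCW winding requirement mentioned in the comments can be checked with the shoelace formula. A minimal sketch in Python (not Farseer code; `signed_area` and `force_ccw` are hypothetical helpers written only to illustrate the idea):

```python
def signed_area(poly):
    """Shoelace formula: positive for counter-clockwise polygons."""
    area = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return area / 2.0

def force_ccw(poly):
    """Reverse the winding if the polygon is clockwise."""
    return poly if signed_area(poly) > 0 else poly[::-1]

# A clockwise unit square has negative signed area...
cw = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(signed_area(cw))            # -1.0

# ...reversing the vertex order makes it counter-clockwise:
print(signed_area(force_ccw(cw))) # 1.0
```

Flipping the y-axis negates the signed area, which is exactly why the winding has to be restored after scaling.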

That is how I am creating all of the Bodies for my game.

The following are the number of elements (vertices) contained in variable 'textureVertices' after using PolygonTools.CreatePolygon:

Black Terrain: 126
Sphere: 20
Green Square: 20
Platforms: 36 

The problem I am now facing is that the polygons that are constructed are far from adequate when it comes to collision detection:

The 3 brownish platforms, the ball, the green square and the black terrain are all physics objects built with the above code. 

The only collision detection that works properly is between the ball and the green square.  Notice how there is a small gap between the ball and the platform underneath it; it's as if, according to the physics engine, the platform is a flat rectangle when it clearly isn't.

Notice also the collision detection between the green square and the terrain; it's completely off.

The last image shows how the black terrain is seen as a flat surface by the engine.  Both the ball and the green box slide across the terrain completely disregarding the 'obstacles' that are in the way.

Does anyone have any ideas regarding this problem?

The following are the corresponding images:


Thanks for taking the time to read this question and any help would be highly appreciated.


Feb 19, 2011 at 8:45 PM

I would recommend you set up the DebugViewXNA project inside your game to see what is really going on. The polygons might just be offset, or you might need to tweak some of the texture-to-vertices parameters to better fit your textures.
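To illustrate why the simplification tolerance matters: a distance-based reduction drops any vertex that lies closer than the tolerance to the previously kept one, so too large a value can flatten small bumps into straight edges. A rough Python sketch of the idea (an assumption about how a distance-based simplifier like `SimplifyTools.ReduceByDistance` behaves, not its actual implementation):

```python
import math

def reduce_by_distance(verts, min_dist):
    """Keep a vertex only if it is at least min_dist away from the
    previously kept vertex (a simplified sketch, not Farseer's code)."""
    if not verts:
        return verts
    kept = [verts[0]]
    for v in verts[1:]:
        last = kept[-1]
        if math.hypot(v[0] - last[0], v[1] - last[1]) >= min_dist:
            kept.append(v)
    return kept

# A small step near (10, 9)-(10, 10) survives a 4-pixel tolerance
# only partially: the vertex (10, 10) is dropped.
outline = [(0, 0), (1, 0), (10, 0), (10, 9), (10, 10), (0, 10)]
print(reduce_by_distance(outline, 4.0))
# [(0, 0), (10, 0), (10, 9), (0, 10)]
```

With a tolerance comparable to the size of a feature in the texture, that feature disappears from the collision shape entirely, which would look exactly like an object "sliding over" obstacles.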

I would also recommend you use the texture-to-vertices tool in the design process and not live in your game, as it almost always produces sub-optimal results for the engine.