Bug 25048 - Texture Memory Leak
Summary: Texture Memory Leak
Status: RESOLVED FIXED
Alias: None
Product: Mesa
Classification: Unclassified
Component: Drivers/DRI/i965
Version: 7.6
Hardware: Other All
Importance: medium normal
Assignee: Eric Anholt
QA Contact:
URL:
Whiteboard:
Keywords: NEEDINFO
Depends on:
Blocks:
 
Reported: 2009-11-11 19:20 UTC by Sander Jansen
Modified: 2011-01-04 13:21 UTC
CC List: 0 users

See Also:
i915 platform:
i915 features:


Attachments
Trace: Creating & Using & Destroying 5 textures (182.08 KB, text/plain)
2009-11-17 09:55 UTC, Sander Jansen
Details
Trace: Quitting program from debug_part_1 (1.93 KB, text/plain)
2009-11-17 10:00 UTC, Sander Jansen
Details
Code leaking texture memory. (5.35 KB, application/octet-stream)
2009-12-10 15:19 UTC, Sander Jansen
Details
streaming-texture-leak fails with texture size of 2048 (not OOM related) (13.80 KB, text/plain)
2010-06-04 12:27 UTC, Sander Jansen
Details

Description Sander Jansen 2009-11-11 19:20:20 UTC
In my music manager program I use OpenGL to display album covers as simple textures. I've noticed that memory usage keeps increasing over time; specifically, it increases every time I load a new album cover. You only need to listen to a couple of songs for memory use to grow significantly, since most album covers are about 500x500 pixels (RGB). I suspect the Intel driver doesn't free the texture memory. Here's what I do:

1) Delete the old texture (glDeleteTextures)
2) Create a new texture (glGenTextures)
3) Fill the texture with new image data (glBindTexture, glTexImage2D, glTexSubImage2D, etc.)
4) Display

Now, if I keep reusing the same texture (calling glGenTextures only once), memory usage still keeps increasing, but at a much slower pace than before. If I disable hardware acceleration altogether (using driinfo), memory usage stays nice and flat.

I'm using Mesa 7.6 and the Intel 2.9.1 driver with KMS.
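
In code, the cycle above looks roughly like this (a simplified sketch with hypothetical names like update_cover; the actual code is in GMImageView.cpp):

    #include <GL/gl.h>

    /* Simplified sketch of the update cycle described above;
       "cover_tex" and "update_cover" are made-up names. */
    static GLuint cover_tex = 0;

    static void update_cover(const unsigned char *rgb, int w, int h) {
        if (cover_tex)
            glDeleteTextures(1, &cover_tex);      /* 1) delete old texture */
        glGenTextures(1, &cover_tex);             /* 2) create new texture */
        glBindTexture(GL_TEXTURE_2D, cover_tex);  /* 3) fill with image data */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, rgb);
        /* 4) display: the paint handler binds the texture and draws
           a textured quad */
    }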
Comment 1 Eric Anholt 2009-11-16 17:49:07 UTC
Can you provide a minimal testcase that we can integrate into glean to show off the problem?
Comment 2 Sander Jansen 2009-11-16 18:12:08 UTC
(In reply to comment #1)
> Can you provide a minimal testcase that we can integrate into glean to show off
> the problem?
>

I think so. It should be pretty trivial (creating/using/destroying textures in a loop).

http://sourceforge.net/projects/glean/files doesn't seem to provide any files. Where can I find the latest glean?

Comment 3 Sander Jansen 2009-11-17 09:55:44 UTC
Created attachment 31272 [details]
Trace: Creating & Using & Destroying 5 textures

Here you can see my program:

1) creating a texture
2) using it
3) destroying it.

I do this 5 times. I've added "[GMM] glGenTextures" and "[GMM] glDestroyTextures" markers to indicate where I call those functions. Observed behaviour: my program's memory usage keeps increasing.

See the next attachment to see what happens when I quit my program and destroy the context.
Comment 4 Sander Jansen 2009-11-17 10:00:41 UTC
Created attachment 31273 [details]
Trace: Quitting program from debug_part_1

Here I quit the program from debug_part_1. The OpenGL context gets destroyed, and Valgrind doesn't report any memory as lost. I don't know the Intel driver code very well, but some of the "bo_unreference final" calls seem rather late. For example, a bo gets created when I create a texture:

[GMM] glGenTextures
intelNewTextureObject
intelNewTextureImage
intelTexImage target GL_TEXTURE_2D level 0 500x500x1 border 0
guess_and_alloc_mipmap_tree
intel_miptree_create_internal target GL_TEXTURE_2D format GL_RGB level 0..0 <-- 0x11c2a00
intel_miptree_set_level_info level 0 size: 500,500,1 offset 0,0 (0x0)
brw_miptree_layout: 512x500x4 - sz 0xfa000
bo_create: buf 33 (region) 1024000b

But it only gets unreferenced when the context gets destroyed:

bo_unreference final: 38 (SS_SURF_BIND)
bo_unreference final: 34 (SS_SURF_BIND)
bo_unreference final: 33 (region)
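
Schematically, what I suspect is happening looks like this (an illustration with made-up names, not the actual driver code): some cached object holds its own reference on the texture's bo, so deleting the texture drops one reference, but the final unreference only happens when the cache is torn down with the context.

    /* Illustration only -- made-up names, not the actual i965 code. */
    #include <stdlib.h>

    struct bo {
        int refcount;
    };

    static void bo_unreference(struct bo *bo) {
        if (--bo->refcount == 0)
            free(bo);    /* "bo_unreference final" in the trace */
    }

    /* A cached state object keeps an extra reference on the bo, so
       glDeleteTextures drops the texture's own reference, but the bo
       is only freed when the cache entry is destroyed at context
       teardown. */
    struct cache_entry {
        struct bo *bo;
    };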
Comment 5 Eric Anholt 2009-12-10 14:31:51 UTC
Very sorry for saying glean -- that was a mistake.  piglit is the testsuite that we use, and I've made a testcase (streaming_texture_leak) there before to blow up on particular texturing leaks, by looping on a create/destroy cycle that leaked until the system OOMed.

http://people.freedesktop.org/~nh/piglit/

(still NEEDINFO -- the traces aren't helpful without the code)
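
For reference, the shape of that test is roughly as follows (a paraphrase, not the actual piglit source; ITERATIONS, TEXTURE_SIZE, and "pixels" are placeholders):

    /* Paraphrase of the create/draw/destroy stress loop. Assumes a
       current GL context and a "pixels" buffer holding
       TEXTURE_SIZE * TEXTURE_SIZE RGBA bytes. */
    for (int i = 0; i < ITERATIONS; i++) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, TEXTURE_SIZE, TEXTURE_SIZE,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        /* draw once so the texture is actually referenced by the GPU */
        glEnable(GL_TEXTURE_2D);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
        glDeleteTextures(1, &tex);  /* should release the backing bo */
    }
    /* if textures leak, memory grows each iteration until the OOM
       killer fires */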
Comment 6 Sander Jansen 2009-12-10 15:19:34 UTC
Created attachment 31955 [details]
Code leaking texture memory.

GMImageView::updateTexture(FXImage * image) is called whenever the texture gets refreshed. onPaint does the repainting.

You can also see the current code in the source repository:
http://code.google.com/p/gogglesmm/source/browse/trunk/src/GMImageView.cpp

I tried the test case. I see memory consumption going up and down between 200 MB and 2.0 GB (so obviously at some point it does free some memory). However, since my machine contains 4 GB of RAM, the test always passes because the OOM killer is never needed. Setting TEXTURE_SIZE to 2048 will make the test case "successfully" fail.
Comment 7 Sander Jansen 2009-12-12 09:53:03 UTC
I'd just like to note that I see the same problem on my desktop PC with an Intel G45 graphics chip.
Comment 8 Eric Anholt 2010-01-06 10:46:23 UTC
How many textures are involved?  What sizes?
Comment 9 Sander Jansen 2010-01-06 11:16:30 UTC
(In reply to comment #8)
> How many textures are involved?  What sizes?
> 

I'm not sure I understand what you're asking. My viewer displays cover art and uses only one texture to display it, so one texture is involved. When the display changes, the texture gets deleted [ideally] and a new texture is created and used to display the new image. As I said before, even when reusing the texture, memory increases over time (albeit at a much slower pace).

Typically the texture is about 500x500 pixels. The reuse variant looks roughly like the sketch below.
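
(A hypothetical sketch, not the real code; it assumes "tex" was already allocated at w x h with glTexImage2D.)

    /* Reuse path: allocate the texture once, then overwrite its
       contents with glTexSubImage2D on each cover change. */
    static void update_cover_reuse(GLuint tex, const unsigned char *rgb,
                                   int w, int h) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGB, GL_UNSIGNED_BYTE, rgb);
    }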
Comment 10 Sander Jansen 2010-06-04 12:26:18 UTC
Good News!

On Arch Linux with following software versions installed:

kernel26 2.6.34-1
libdrm 2.4.20-3
mesa 7.8.1-3
intel-dri 7.8.1-3
xf86-video-intel 2.11.0-2

Running the piglit streaming-texture-leak with texture sizes of 1024 and 2048, memory consumption is very stable (and barely noticeable at 10 MB / 22 MB) and the OOM killer is never needed. So it looks like the texture memory leak is fixed.

The _only_ problem I encountered is that with a texture size of 2048 the test itself still fails when it tries to read back the pixel value. With a texture size of 1024 the test passes. I'll attach the summary for the failed test.
Comment 11 Sander Jansen 2010-06-04 12:27:34 UTC
Created attachment 36062 [details]
streaming-texture-leak fails with texture size of 2048 (not OOM related)
Comment 12 Eric Anholt 2011-01-04 13:21:39 UTC
OK, it sounds like the original problem is fixed now (I think the fix was deleting cached objects that referenced dead regions, but if not, then at least the state batching of the binding table killed it off).

