Bug 6242

Summary: [mach64] Use private DMA buffers (only)
Product: DRI
Component: DRM/other
Version: DRI git
Hardware: x86 (IA32)
OS: Linux (All)
Status: RESOLVED FIXED
Severity: normal
Priority: high
Reporter: George - <fufutos610>
Assignee: Felix Kühling <fxkuehl>
CC: alexdeucher, dri-devel, lrbalt
Bug Depends on: 6209
Bug Blocks: 6666
Attachments (all flags: none):
  mach64: DMA buffers for vertices
  mach64-dri: DMA buffers for vertices
  mach64-ddx: DMA buffers for vertices
  mach64-drm: DMA buffers for vertices - try 1
  mach64-drm: DMA buffers for vertices - try 2
  private DMA buffers / mach64-ddx
  private DMA buffers / mach64-dri
  private DMA buffers / mach64-drm
  private DMA buffers / mach64-ddx - try 1
  private DMA buffers / mach64-drm - try 1
  private DMA buffers / mach64-drm - try 2
  private DMA buffers / mach64-ddx - try 2
  private DMA buffers / mach64-dri - try 1
  private DMA buffers / mach64-ddx - try 3
  private DMA buffers / mach64-drm - try 3
  private DMA buffers / mach64-dri - try 2
  private DMA buffers / mach64-drm - try 4

Description George - 2006-03-13 11:56:32 UTC
Here is my first attempt to use DMA buffers for vertex submission.

It depends on https://bugs.freedesktop.org/show_bug.cgi?id=6209

As this is my first attempt to hack on the drm, any comments would be
greatly appreciated.
Comment 1 George - 2006-03-13 11:58:08 UTC
Created attachment 4911
mach64: DMA buffers for vertices

patch for drm
Comment 2 George - 2006-03-13 11:58:50 UTC
Created attachment 4912
mach64-dri: DMA buffers for vertices

patch for dri
Comment 3 George - 2006-03-13 11:59:29 UTC
Created attachment 4913
mach64-ddx: DMA buffers for vertices

patch for ddx
Comment 4 George - 2006-03-14 01:24:07 UTC
Created attachment 4925
mach64-drm: DMA buffers for vertices - try 1

Set the discard flag of the DMA correctly.
Comment 5 George - 2006-03-30 04:27:06 UTC
Created attachment 5123
mach64-drm: DMA buffers for vertices - try 2

enable test of register values in the vertex buffer.
Comment 6 Leif Delgass 2006-03-31 13:19:55 UTC
(In reply to comment #5)
> Created an attachment (id=5123)
> mach64-drm: DMA buffers for vertices - try 2
> 
> enable test of register values in the vertex buffer.

It's good to see this driver getting some attention again, I'm sorry I don't
have time to help out with the code right now.

I wanted to point out that if you go this route (using client DMA buffers for
vertices), you'll need to unmap the buffers from userspace before verifying and
submitting them to make this secure.  If you look at the CVS history, you'll see
that we actually used to use DMA buffers, but changed to the current method when
beginning the work to make the driver secure.  We decided that copying the data
from client memory into a pool of unmapped DMA buffers in the kernel would be
faster than unmapping and re-mapping the buffers for each submit.  Copying the
userspace data into DMA buffers allocated from the mapped pool was a stopgap
until we had a way to allocate unmapped buffers in the kernel for this purpose.
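
A minimal sketch of that copy-based scheme (the helper names and the size
limit are made up, this is not the actual mach64 code):

static int copy_and_submit_vertices(drm_device_t *dev,
                                    void __user *data, int used)
{
        drm_buf_t *buf;

        if (used <= 0 || used > MACH64_BUFFER_SIZE) /* hypothetical limit */
                return DRM_ERR(EINVAL);

        /* A buffer from a pool that is never mapped to user space. */
        buf = mach64_get_private_buffer(dev);       /* made-up helper */
        if (buf == NULL)
                return DRM_ERR(EAGAIN);

        if (DRM_COPY_FROM_USER(buf->address, data, used))
                return DRM_ERR(EFAULT);

        /* The client has no mapping, so the data cannot change between
         * verification and submission to the hardware. */
        return mach64_verify_and_dispatch(dev, buf, used); /* made-up */
}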

It's been a while since I looked at the code in detail, but the problem at the
time was a limitation of the DRM buffer management infrastructure.  As I recall,
there wasn't an easy way to create two sets of DMA buffers, one mapped to
userspace, and one which wasn't mapped.  You may want to look into the mailing
list archives for more discussion on this.  I think that this was the last
remaining issue in securing the driver.

-Leif
Comment 7 Felix Kuehling 2006-04-01 02:50:40 UTC
(In reply to comment #6)

> It's been a while since I looked at the code in detail, but the problem at the
> time was a limitation of the DRM buffer management infrastructure.  As I recall,
> there wasn't an easy way to create two sets of DMA buffers, one mapped to
> userspace, and one which wasn't mapped.

I did something that sounds like what you're trying to do, in the Savage driver
for command DMA. Instead of using the DMA buffer infrastructure, I simply created
a map of a chunk of AGP memory that can't be mapped by unprivileged clients.
The DRM uses that map as a command buffer with its own ageing and buffer
management implementation.
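
Roughly, that approach amounts to something like this (a sketch with made-up
offset/size variables, not the actual Savage code):

/* Reserve a chunk of AGP memory as a READ_ONLY map, so unprivileged
 * clients cannot map it writable; the DRM then manages it itself as a
 * command buffer with its own aging. */
drm_local_map_t *map;
int ret = drm_addmap(dev, agp_base + cmd_dma_offset, cmd_dma_size,
                     _DRM_AGP, _DRM_READ_ONLY, &map);
if (ret != 0)
        return ret;
dev_priv->cmd_dma = map; /* private, kernel-managed command buffer */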

I was going to try something like that in the Mach64 driver. Not sure how
feasible this is or if there are any hardware issues that may prevent this.
Comment 8 George - 2006-04-01 06:39:46 UTC
(In reply to comment #6)
> (In reply to comment #5)
> > Created an attachment (id=5123)
> > mach64-drm: DMA buffers for vertices - try 2
> > 
> > enable test of register values in the vertex buffer.
> 
> It's good to see this driver getting some attention again, I'm sorry I don't
> have time to help out with the code right now.
> 
> I wanted to point out that if you go this route (using client DMA buffers for
> vertices), you'll need to unmap the buffers from userspace before verifying and
> submitting them to make this secure.  If you look at the CVS history, you'll see
> that we actually used to use DMA buffers, but changed to the current method when
> beginning the work to make the driver secure.  We decided that copying the data
> from client memory into a pool of unmapped DMA buffers in the kernel would be
> faster than unmapping and re-mapping the buffers for each submit.  Copying the
> userspace data into DMA buffers allocated from the mapped pool was a stopgap
> until we had a way to allocate unmapped buffers in the kernel for this purpose.
> 
Grrh, I knew I was missing something basic when I tried to touch this stuff ...
also a bit stupid of me to lurk and search mesa-dev for mach64 DMA instead of
dri-devel.

I'll probably do my "homework" over the weekend. Just a question to make sure I
am not completely mixed up: as it stands now, is drm_blit secure? I mean, a
malicious client can still change the contents of the DMA buffer after
submission to the drm in the blit case too, especially the first
MACH64_HOSTDATA_BLIT_OFFSET bytes.

> It's been a while since I looked at the code in detail, but the problem at the
> time was a limitation of the DRM buffer management infrastructure.  As I recall,
> there wasn't an easy way to create two sets of DMA buffers, one mapped to
> userspace, and one which wasn't mapped.  You may want to look into the mailing
> list archives for more discussion on this.  I think that this was the last
> remaining issue in securing the driver.
> 
> -Leif

Thanks for your comments and your work on open source drivers,
george.
Comment 9 George - 2006-04-04 10:51:56 UTC
What about the following approach:

Do not map the DMA buffers to user-space.

For dma_vertex, nothing changes.

For dma_blit, the client submits a pointer to user-space memory. 
In the AGP case nothing changes since the default method is AGP texturing (in
fact "local_textures" do not currently work with AGP, bug #6209). 
In the PCI case, the simple thing is copy_from_user to a private DMA buffer. If
the performance regression is unacceptable, we can change the blit ioctl to
submit w/h/pitch parameters and turn the memcpy currently done in user-space
into a copy_from_user. I presume that it's easy to determine that the pointer
actually points to memory owned by the process.
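
A sketch of what the proposed blit path could look like (hypothetical names;
blit->buf stands for the user-space pointer in the changed ioctl argument):

static int mach64_dispatch_blit_sketch(drm_device_t *dev,
                                       drm_mach64_blit_t *blit,
                                       drm_buf_t *buf, int cpp)
{
        size_t size = (size_t)blit->pitch * blit->height * cpp;

        /* The first MACH64_HOSTDATA_BLIT_OFFSET bytes hold register
         * setup written by the kernel, so a malicious client can no
         * longer change them after submission. */
        if (DRM_COPY_FROM_USER((char *)buf->address
                               + MACH64_HOSTDATA_BLIT_OFFSET,
                               blit->buf, size))
                return DRM_ERR(EFAULT);

        /* ... verify, emit register commands and fire the buffer ... */
        return 0;
}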

Hopefully, we can reuse the current buffer management routines. If it is
possible to do drmAddBufs without drmMapBufs, then very few changes are
required (I saw a comment in drm_mapbufs that PCI buffers are actually mapped in
drm_addbufs ...).
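
Under that assumption, the DDX side would then be just a drmAddBufs call with
no matching drmMapBufs (a sketch; the count/size constants are hypothetical):

/* DDX sketch: add the DMA buffers but never map them into clients. */
int count = drmAddBufs(fd, MACH64_DMA_BUF_COUNT, MACH64_DMA_BUF_SIZE,
                       flags, agp_buffer_offset);
if (count <= 0)
        return FALSE; /* no DMA buffers; fall back to non-DMA operation */
/* ... no drmMapBufs() call here ... */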

Sorry if I am wasting your time with uninformed assumptions,
george.
Comment 10 George - 2006-04-10 11:34:59 UTC
(In reply to comment #9)
> What about the following approach:
> 
> Do not map the DMA buffers to user-space.
>
> [...] 
>
> Hopefully, we can reuse the current buffer management routines. If it is
> possible to do drmAddBufs without drmMapBufs, then very few changes are
> required (I saw a comment in drm_mapbufs that PCI buffers are actually mapped
> in drm_addbufs ...).
> 
I'm trying to tackle this by mapping the DMA buffers READ_ONLY. My current
understanding is that PCI DMA buffers do actually get mapped in drm_mapbufs:
drm_mapbufs -> do_mmap(..., 0) -> drm_mmap -> drm_mmap_dma

If this is correct, can someone piggyback the following change at the
description of drm_mapbufs?

 * Maps the AGP, SG or PCI buffer region with do_mmap(), and copies information
 * about each buffer into user space. For PCI buffers, it calls do_mmap() with
 * offset equal to 0, which drm_mmap interprets as PCI buffers and calls
 * drm_mmap_dma().
Comment 11 George - 2006-04-12 10:44:52 UTC
Start over.

The following patches implement the suggestion at comment 9.

For AGP, they reserve the AGP memory to be used for DMA buffers as READ_ONLY.

For PCI, they make drm_mmap_dma return as soon as it is called, to prevent
mapping of the DMA buffers; this will have to be handled with a flag in
drm_buf_desc_t.

mach64_dma_blit was also converted to submit a pointer to user-space memory,
and the memcpy currently done in user-space was substituted with a
copy_from_user (I presume this is secure, or can easily be made so ...). Wrt
performance for PCI, I see a ~12% speedup for some Mesa demos I tested.

Also, the patches contain some other, mainly cosmetic, changes:
o   factor out from mach64_dma_dispatch_vertex the code to reclaim a used
    buffer; it is now used by mach64_dma_dispatch_blit also

o   factor out from mach64_freelist_get the code to reclaim a completed buffer;
    this was to improve readability for me

o   move the memory reservation for the DMA ring in the PCI case to DDX,
    this unifies a couple of PCI/AGP code paths for ring memory in the drm

Hope I haven't managed to end up on ignore lists with this bug report;
any comments are greatly appreciated,

george.
Comment 12 George - 2006-04-12 10:46:01 UTC
Created attachment 5269
private DMA buffers / mach64-ddx
Comment 13 George - 2006-04-12 10:46:34 UTC
Created attachment 5270
private DMA buffers / mach64-dri
Comment 14 George - 2006-04-12 10:47:11 UTC
Created attachment 5271
private DMA buffers / mach64-drm
Comment 15 George - 2006-04-13 06:24:23 UTC
Created attachment 5283
private DMA buffers / mach64-ddx - try 1

Add DRM_PCI_READ_ONLY flag in drmBufDescFlags. 

This flag also has to be added in
o libdrm/xf86drm.h
o xorg/hw/xfree86/os-support/xf86drm.h
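
For reference, the flag slots into the existing enum roughly like this (a
sketch based on the xf86drm.h of that era; the name that finally landed, per
comment 25, is DRM_PCI_BUFFER_RO):

typedef enum {
        DRM_PAGE_ALIGN    = 0x01, /* align on page boundaries for DMA   */
        DRM_AGP_BUFFER    = 0x02, /* buffer is in AGP space             */
        DRM_SG_BUFFER     = 0x04, /* buffer is in scatter/gather memory */
        DRM_FB_BUFFER     = 0x08, /* buffer is in frame buffer memory   */
        DRM_PCI_BUFFER_RO = 0x10  /* map PCI DMA buffers read-only      */
} drmBufDescFlags;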
Comment 16 George - 2006-04-13 06:27:58 UTC
Created attachment 5284
private DMA buffers / mach64-drm - try 1

Add DRM_PCI_READ_ONLY flag in drmBufDescFlags.

When drm_mmap_dma sees this flag (set from drmAddBufs), it maps the PCI DMA
buffers as read-only (I just copied the code from drm_mmap).

An additional flag is needed, since PCI DMA buffers do not have an associated
map.
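
Presumably the read-only path in drm_mmap_dma looks much like the drm_mmap
code for _DRM_READ_ONLY maps, along these lines (a sketch; _DRM_DMA_USE_PCI_RO
is the internal flag name that appears in the final DRM tree):

if (!capable(CAP_SYS_ADMIN) && (dma->flags & _DRM_DMA_USE_PCI_RO)) {
        /* Strip write permission from the client's mapping. */
        vma->vm_flags &= ~(VM_WRITE | VM_MAYWRITE);
#if defined(__i386__) || defined(__x86_64__)
        pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW;
#else
        /* Other architectures: write-protect the page protection. */
        vma->vm_page_prot = __pgprot(pte_val(pte_wrprotect(
                __pte(pgprot_val(vma->vm_page_prot)))));
#endif
}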
Comment 17 George - 2006-05-24 04:01:03 UTC
Anybody willing to review this?

Needless to say, I am willing to make any changes or submit in pieces, if this
is in the right direction and I am not still confused.
Comment 18 Felix Kuehling 2006-05-25 07:16:16 UTC
I'd like to review your patches. Someone needs to kick my b*tt so I get a card
and try this stuff. I also got a private email from someone interested in
mach64. I suggested to try your patches. Give me a few weekends and don't forget
to remind me every once in a while. ;-)
Comment 19 George - 2006-06-22 08:47:57 UTC
> Give me a few weekends and don't forget
> to remind me every once in a while. ;-)

what are you doing this weekend? do you like reviewing drm patches? ;-)
Comment 20 Felix Kühling 2006-06-22 12:51:08 UTC
(In reply to comment #19)
> > Give me a few weekends and don't forget
> > to remind me every once in a while. ;-)
> 
> what are you doing this weekend? do you like reviewing drm patches? ;-)

Yep. I haven't forgotten about it yet.

I spent the last few evenings upgrading my home box to Dapper, so I now have
modular Xorg by default. I also installed a PCI Mach64 with 4MB RAM. I'll get
and build the relevant sources from GIT/CVS soon. I can't promise anything for
this weekend though. I have to watch Soccer World Cup on Saturday (Germany
against Sweden) ;-) and Sunday is booked for a company family event.

Assigning to myself and adding dri-devel to the CC list to maintain visibility.
Comment 21 George - 2006-07-16 07:01:09 UTC
Created attachment 6234
private DMA buffers / mach64-drm - try 2

This is the same as the previous version, with the mach64 major version bumped.

It is a patchset with separate commits for the preparation code, the changes
to the DRM core, and lastly the backwards-incompatible changes to the mach64
DRM.
Comment 22 George - 2006-07-16 07:02:58 UTC
Created attachment 6235
private DMA buffers / mach64-ddx - try 2

Require a new mach64 DRM, also free the PCI DMA ring memory.
Comment 23 George - 2006-07-16 07:03:59 UTC
Created attachment 6236
private DMA buffers / mach64-dri - try 1

Require new mach64 DRM.
Comment 24 George - 2006-07-16 09:38:16 UTC
Created attachment 6241
private DMA buffers / mach64-ddx - try 3

stupid: check that the PCI DMA ring was actually allocated before freeing it.
Comment 25 Felix Kühling 2006-07-17 21:17:56 UTC
Still untested, but this looks pretty good. You're not adding any performance
penalty, because someone has to copy data into the DMA buffers for blits
anyway. You
just moved the copy from user space to kernel space and eliminated an ioctl,
which even improved performance. Neat!

It's also nice that you bundled all changes that affect the DRM interfaces in
one diff.

I have just a few little comments for improvements:
- You said it yourself: mach64_freelist_put should be moved to mach64_dma.c
- Please also bump the driver date in mach64_drv.h
- mach64FireBlitLocked should be able to handle EAGAIN or pass it back to the
caller. This is a new error condition for the blit ioctl that should be handled
more gracefully. This sort of error was handled in mach64GetBufferLocked previously.

Another thing to keep in mind is that you don't have a solution for BSD because
you only implement the DRM_PCI_BUFFER_RO flag for linux. I'm not sure how many
people care about mach64 on BSD. I guess on BSD the flag would simply be
ignored, so PCI DMA buffers would still be mapped for writing.
Comment 26 George - 2006-07-18 09:56:50 UTC
Created attachment 6270
private DMA buffers / mach64-drm - try 3

- mach64_freelist_put moved to mach64_dma.c
- bump the driver date in mach64_drv.h
Comment 27 George - 2006-07-18 10:00:22 UTC
Created attachment 6271
private DMA buffers / mach64-dri - try 2

Handle EAGAIN in mach64FireBlitLocked: call drmCommandWrite up to
MACH64_TIMEOUT times when EAGAIN is returned.

Also handle EAGAIN in mach64FlushVerticesLocked.
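
A sketch of that retry loop (the variable names are hypothetical):

int ret, i;

for (i = 0; i < MACH64_TIMEOUT; i++) {
        /* EAGAIN means no free private DMA buffer in the kernel yet. */
        ret = drmCommandWrite(mmesa->driFd, DRM_MACH64_BLIT,
                              &blit, sizeof(blit));
        if (ret != -EAGAIN)
                break;
}
if (ret != 0)
        fprintf(stderr, "%s: blit failed, ret=%d\n", __FUNCTION__, ret);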
Comment 28 George - 2006-09-25 13:06:24 UTC
Any chance that this gets reviewed for X.org 7.2 ?

thanks,
george.
Comment 29 Alex Deucher 2006-09-25 13:09:25 UTC
(In reply to comment #28)
> Any chance that this gets reviewed for X.org 7.2 ?
> 
> thanks,
> george.
> 

I just made it a blocker for 7.2
Comment 30 Felix Kühling 2006-09-25 13:51:45 UTC
(In reply to comment #29)
> (In reply to comment #28)
> > Any chance that this gets reviewed for X.org 7.2 ?
> > 
> > thanks,
> > george.
> > 
> 
> I just made it a blocker for 7.2

Sorry. I have no good excuse for neglecting this for so long. My bad conscience
is bugging me and in fact I just started looking at this again last night. Alex,
I suspect you are more fluent in git and more up-to-date on current development
than I am, so feel free to beat me to it.

There are some changes that affect the shared DRM code (not mach64-specific). I
guess Dave Airlie should give his ok before this gets committed. Otherwise, last
time I reviewed this it looked pretty good and George addressed the issues I
pointed out very quickly. So I am confident that this will be OK for 7.2. What's
the planned release date?
Comment 31 George - 2006-09-26 07:14:36 UTC
(In reply to comment #30)
> So I am confident that this will be OK for 7.2. What's
> the planned release date?
> 

http://wiki.x.org/wiki/ReleaseSchedule
Comment 32 Felix Kühling 2006-10-01 16:17:02 UTC
I've gone over your latest patches. They look good to me. I applied them to my
local git trees and my Mesa working copy and tested them briefly on my amd64
with a PCI mach64. They seem to work nicely. I ran into some unrelated problems
with the latest DRM that I want to get resolved first before pushing your
patches upstream. The latest Mesa seems to have some problems too, regarding
texture management. However, I saw those before applying your Mesa patch, too.

I'll probably push your patches beginning of the week.

Good work! And thanks for your patience.
Comment 33 George - 2006-10-02 06:55:08 UTC
Created attachment 7232
private DMA buffers / mach64-drm - try 4

This is the last one, please push this.

- drop a one-liner for a warning that was fixed otherwise (the one-liner
  would reintroduce the warning)

- add a one-liner for a positive return value (ENOMEM -> DRM_ERR(ENOMEM))

- move the drm_vm.c patch a few lines above so that drm_mmap_dma() does things
  in the same order as drm_mmap()
Comment 34 George - 2006-10-02 07:03:06 UTC
(In reply to comment #32)
> I've gone over your latest patches. They look good to me. I applied them to my
> local git trees and my Mesa working copy and tested them briefly on my amd64
> with a PCI mach64. They seem to work nicely. I ran into some unrelated problems
> with the latest DRM that I want to get resolved first before pushing your
> patches upstream. 

Wrt the mach64 IRQ, my card has an on-board jumper for enabling the IRQ. Could
this be the case with yours?

> The latest Mesa seems to have some problems too, regarding
> texture management. However, I saw those before applying your Mesa patch, too.
> 
I ported the DRI mach64 driver over to the common infrastructure for texture
memory management in bug #7260. Does this help with the problems you saw?

> I'll probably push your patches beginning of the week.

Great! Thanks for your time and feedback.
Comment 35 Felix Kühling 2006-10-02 07:43:13 UTC
I saw that you have GIT commit access by now, at least to Xorg. Do you have
write access to DRM and Mesa as well? In that case, do you want to push your
changes yourself?
Comment 36 Adam Jackson 2006-10-02 08:35:47 UTC
(In reply to comment #35)
> I saw that you have GIT commit access by now, at least to Xorg. Do you have
> write access to DRM and Mesa as well? In that case, do you want to push your
> changes yourself?

He does now.
Comment 37 George - 2006-10-02 09:23:04 UTC
(In reply to comment #36)
> (In reply to comment #35)
> > I saw that you have GIT commit access by now, at least to Xorg. Do you have
> > write access to DRM and Mesa as well? In that case, do you want to push your
> > changes yourself?
> 
> He does now.

I can definitely handle the xorg and drm GIT parts. Is it OK for you to commit
to Mesa CVS? Strange as it may seem, I am much more comfortable with git than
with cvs.
Comment 38 Felix Kühling 2006-10-02 10:07:57 UTC
(In reply to comment #37)
> (In reply to comment #36)
> > (In reply to comment #35)
> > > I saw that you have GIT commit access by now, at least to Xorg. Do you have
> > > write access to DRM and Mesa as well? In that case, do you want to push your
> > > changes yourself?
> > 
> > He does now.
> 
> I can definitely handle the xorg and drm GIT parts. Is it OK for you to commit
> to Mesa CVS? Strange as it may seem, I am much more comfortable with git than
> with cvs.
> 

Ah, the new generation. ;-) Sure. I'll watch the commit mailing lists and commit
once you're done.
Comment 39 Felix Kühling 2006-10-02 13:45:11 UTC
I saw you committed your DRM and DDX changes. I just committed your Mesa changes
too. Moving to Fixed.
Comment 40 Alex Deucher 2006-10-02 14:06:20 UTC
With these changes can we finally turn on mach64 DRI support by default in the
ati DDX?
Comment 41 Felix Kühling 2006-10-02 14:37:56 UTC
(In reply to comment #40)
> With these changes can we finally turn on mach64 DRI support by default in the
> ati DDX?

I thought it was. I didn't need to do anything special to build it with DRI
support. I guess it was just never enabled on standard distros because the
default kernels didn't have a DRM driver.
Comment 42 George - 2006-10-03 05:53:44 UTC
(In reply to comment #41)
> (In reply to comment #40)
> > With these changes can we finally turn on mach64 DRI support by default in the
> > ati DDX?
> 
> I thought it was. I didn't need to do anything special to build it with DRI
> support. I guess it was just never enabled on standard distros because the
> default kernels didn't have a DRM driver.

I guess the question is whether the mach64 DRM is secure now. Personally, I
have not done a security audit; this bug report is about fixing the last known
issue in securing the driver. So I guess the status is "pending a last serious
security audit".
