Bug 2968 - Clean up the huge plethora of MIN() macro definitions in XFree86 server.
Status: RESOLVED FIXED
Alias: None
Product: xorg
Classification: Unclassified
Component: Server/General (show other bugs)
Version: git
Hardware: x86 (IA32) Linux (All)
Importance: high normal
Assignee: Mike A. Harris
QA Contact: Xorg Project Team
URL:
Whiteboard:
Keywords: janitor
Depends on:
Blocks:
 
Reported: 2005-04-11 05:00 UTC by Mike A. Harris
Modified: 2009-06-03 10:46 UTC (History)
2 users (show)

See Also:


Attachments
minmax.c (2.64 KB, text/plain)
2005-04-17 23:42 UTC, Mike A. Harris
no flags Details
minmax.c (4.06 KB, text/plain)
2005-04-18 05:07 UTC, Mike A. Harris
no flags Details

Description Mike A. Harris 2005-04-11 05:00:32 UTC
This is somewhat of a janitorial cleanup issue.  The X server has numerous
definitions of MIN()/MAX() macros throughout it, many of which are just
plain wrong.  There is almost no consistency:

[mharris@devel Xserver]$ grep -ri 'define\WMIN(' *
Xprint/pcl/Pcl.h:#define MIN(a,b) (((a)<(b))?(a):(b))
Xprint/ps/Ps.h:#define MIN(a,b) (((a)<(b))?(a):(b))
dix/main.c:#define MIN(a,b) (((a) < (b)) ? (a) : (b))
fb/fbtrap.c:#define MIN(a, b) ((a) < (b) ? (a) : (b))
hw/xfree86/drivers/apm/apm_regs.h:#define MIN(a,b)      ((a) < (b) ? (a) : (b))
hw/xfree86/drivers/glide/glide_driver.c:#define MIN(a,b) ((a) < (b) ? (a) : (b))
hw/xfree86/drivers/glint/pm2_video.c:#define MIN(a, b) (((a) < (b)) ? (a) : (b))
hw/xfree86/drivers/siliconmotion/smi_video.c:#define MIN(a, b) (((a) < (b)) ? (a) : (b))
hw/xfree86/drivers/vmware/vmware.h:#define MIN(a,b) ((a)<(b)?(a):(b))
hw/xfree86/common/xf86Bus.c:#define MIN(x,y) ((x<y)?x:y)
hw/xfree86/common/xf86pciBus.c:#define MIN(x,y) (((x)<(y))?(x):(y))
hw/xfree86/common/xf86pciBus.c.Xserver-common-fix-broken-MIN-macro-definition:#define MIN(x,y) ((x<y)?x:y)
hw/xfree86/etc/MTRR-Lynx.shar:X#define min(a,b) MIN(a,b)
hw/xfree86/loader/aoutloader.c:#define MIN(a,b) ((a)<(b)?(a):(b))
hw/xfree86/shadowfb/shadow.c:#define MIN(a,b) (((a)<(b))?(a):(b))
hw/xfree86/xf4bpp/OScompiler.h:#define MIN(a,b) (((a)<(b))?(a):(b))
hw/xfree86/xf86cfg/text-mode.c:#define  MIN(a, b)       ((a) < (b) ? (a) : (b))
include/misc.h:#define min(a, b) (((a) < (b)) ? (a) : (b))
miext/rootless/rootlessCommon.h:#define MIN(x,y) ((x) < (y) ? (x) : (y))


At a bare minimum, all of these macros should be defined correctly
and consistently.  An improvement over that would be defining them
in one place and using them everywhere.

An additional improvement over that, would be using Linux kernel style
min()/max() macros which are much safer but require usage modification.
Comment 1 Roland Mainz 2005-04-11 18:45:39 UTC
> An additional improvement over that, would be using Linux kernel style
> min()/max() macros which are much safer but require usage modification.

What do these macros look like?  (I am curious, since many C compilers
also provide functions called |min()|, |max()| and |abs()|, and defining
macros with the same names calls for trouble.)
Comment 2 Mike A. Harris 2005-04-17 03:13:12 UTC
From:  linux-2.6.11.7/include/linux/kernel.h



/*
 * min()/max() macros that also do
 * strict type-checking.. See the
 * "unnecessary" pointer comparison.
 */
#define min(x,y) ({ \
        typeof(x) _x = (x);     \
        typeof(y) _y = (y);     \
        (void) (&_x == &_y);            \
        _x < _y ? _x : _y; })

#define max(x,y) ({ \
        typeof(x) _x = (x);     \
        typeof(y) _y = (y);     \
        (void) (&_x == &_y);            \
        _x > _y ? _x : _y; })

/*
 * ..and if you can't take the strict
 * types, you can specify one yourself.
 *
 * Or not use min/max at all, of course.
 */
#define min_t(type,x,y) \
        ({ type __x = (x); type __y = (y); __x < __y ? __x: __y; })
#define max_t(type,x,y) \
        ({ type __x = (x); type __y = (y); __x > __y ? __x: __y; })


Comment 3 Alan Coopersmith 2005-04-17 09:07:42 UTC
Those also require #if __GNUC__, since typeof() is a gcc-ism.
Comment 4 Mike A. Harris 2005-04-17 17:16:17 UTC
Correct, typeof is a gcc extension, so a fallback is required for
non-gcc compilers.  Here is an implementation of the new macros, with
fallbacks for non-gcc compilers, offered for scrutiny:


#if __GNUC__
/*
 * min()/max() macros that perform strict type checking using gcc's typeof
 * extension.
 */
#define min(x,y) ({ \
        typeof(x) _x = (x);     \
        typeof(y) _y = (y);     \
        (void) (&_x == &_y);            \
        _x < _y ? _x : _y; })

#define max(x,y) ({ \
        typeof(x) _x = (x);     \
        typeof(y) _y = (y);     \
        (void) (&_x == &_y);            \
        _x > _y ? _x : _y; })

/*
 * min_t()/max_t() allow you to specify the type yourself
 */
#define min_t(type,x,y) \
        ({ type __x = (x); type __y = (y); __x < __y ? __x: __y; })
#define max_t(type,x,y) \
        ({ type __x = (x); type __y = (y); __x > __y ? __x: __y; })
#else
/*
 * min()/max() are standard versions of these macros without type checking,
 * for compatibility with compilers that do not support the gcc typeof
 * extension.
 */
#define min(x,y) ((x) < (y) ? (x) : (y))
#define max(x,y) ((x) > (y) ? (x) : (y))
/* min_t()/max_t() are versions of the gcc specific macros that fall back
 * to using the above standard min()/max() macros for compatibility with
 * non-gcc compilers. */
#define min_t(type,x,y) (min((x),(y)))
#define max_t(type,x,y) (max((x),(y)))
#endif


The big question, is where should this code live, so that it is accessible
to both the client side code and server side code without duplicating
it?

If someone suggests a good place for it, I'll go ahead and whip up a
patch for review that puts the new macros in a single spot and updates
the whole tree to replace all MIN/MAX, etc. macros and usage, with usage
of the new typesafe macros.

While non-gcc platforms won't benefit directly from this, building with
gcc will spot problems that can then be fixed on all platforms, so this
is a good improvement that will be worthwhile.
Comment 5 Roland Mainz 2005-04-17 17:24:35 UTC
Some comments on this:
The macro name should be all uppercase as |min()|, |max()|&co. are already taken
by math library functions and everything ending with |_t| may cause a future
clash with ANSI-C/C++ standards (think about |size_t|, |off_t| etc.).
Additionally the generation of a temporary variable to avoid the double
execution of the arguments (|a|, |b|) is useless in this case as almost all
consumers already do that and/or the compiler itself takes care about this - it
only adds more complexity in the code without benefits.
So I suggest just keeping the |MIN()| and |MAX()| macros and stuffing
them in a central location - for the Xserver code that is likely
Xserver/include/os.h.  I can make a patch tonight if that's OK here...
Comment 6 Mike A. Harris 2005-04-17 18:11:55 UTC
>The macro name should be all uppercase as |min()|, |max()|&co. are already taken
>by math library functions and everything ending with |_t| may cause a future
>clash with ANSI-C/C++ standards (think about |size_t|, |off_t| etc.).

That sounds fine to me.  I used lowercase only because that's what most
code tends to use, and I was using the kernel headers as an example to
work from, which also use lowercase.  Uppercase has an additional
advantage, though: by common convention, uppercase highlights that these
are macros to begin with.

I would suggest using:

MIN()/MAX()/MIN_T()/MAX_T()


>Additionally the generation of a temporary variable to avoid the double
>execution of the arguments (|a|, |b|) is useless in this case as almost all
>consumers already do that and/or the compiler itself takes care about this - it
>only adds more complexity in the code without benefits.

I disagree completely.  The whole point of the kernel style macros, is to
make it more difficult to use the macros incorrectly in a way that can
result in unintended errors.  They are intentionally strict and robust,
to catch unintentional errors from happening.

The complexity is completely contained within the macros themselves,
which are already written.  This harms nothing, and it does bring
benefit.  The benefit occurs when someone adds new code that uses a
macro and unintentionally causes double execution.  The fact that none
of the existing usages cause double execution is not a valid argument
against preventing it from being a problem in future usage.

The code is written, so there is no good reason not to use it.


Comment 7 Roland Mainz 2005-04-17 18:22:05 UTC
(In reply to comment #6)
> >only adds more complexity in the code without benefits.
> 
> I disagree completely.  The whole point of the kernel style macros, is to
> make it more difficult to use the macros incorrectly in a way that can
> result in unintended errors.  They are intentionally strict and robust,
> to catch unintentional errors from happening.

And I disagree completely with the introduction of over-engineered
macros.  Most compilers have extra hand-crafted optimisations for this
kind of |MIN()|/|MAX()| macro - but the new form will defeat that kind
of optimisation.  Additionally, they are overkill and make the code far
less understandable.  I am STRONGLY against it.
Comment 8 Mike A. Harris 2005-04-17 18:38:00 UTC
As per my suggested implementation in comment #4, the new style macros
are *only* used in the GCC case.  Other compilers will use the fallback,
which is the traditional min/max style macros:

#define min(x,y) ((x) < (y) ? (x) : (y))
#define max(x,y) ((x) > (y) ? (x) : (y))
#define min_t(type,x,y) (min((x),(y)))
#define max_t(type,x,y) (max((x),(y)))

What particular compiler are you referring to, and what specific
problems do you know this implementation would cause?  Please provide
specific examples, and then it can be determined whether there is a
real problem or not.  If there are, solutions can then be explored.
Comment 9 Roland Mainz 2005-04-17 18:40:49 UTC
BTW: The following code does not compile with non-gcc compilers and I am not
sure whether it's really valid ANSI-C:
-- snip --
#define min_t(type,x,y) \
        ({ type __x = (x); type __y = (y); __x < __y ? __x: __y; })
#define max_t(type,x,y) \
        ({ type __x = (x); type __y = (y); __x > __y ? __x: __y; })
-- snip --
Comment 10 Roland Mainz 2005-04-17 18:48:26 UTC
Mike A. Harris wrote:
> As per my suggested implementation in comment #4, the new style macros
> are *only* used in the GCC case.  Other compilers will use the fallback,
> which is the traditional min/max style macros:
>
> #define min(x,y) ((x) < (y) ? (x) : (y))
> #define max(x,y) ((x) > (y) ? (x) : (y))
> #define min_t(type,x,y) (min((x),(y)))
> #define max_t(type,x,y) (max((x),(y)))

Erm... that would mean that gcc evaluates |x| and |y| once while
non-gcc compilers would evaluate them twice - which would lead to
unexpected results if someone (theoretically) tried to do something like
|res = MIN(*s++, 90);| (it's one of the common traps of the C language;
the book "Von C to C" (I don't have the ISBN number right now) has
really bad examples of how things can go wrong... ;-().

> What particular compiler are you referring to, and what specific
> problems do you know this implementation would cause?  Please provide
> specific examples, and then it can be determined whether there is a
> real problem or not.  If there are, solutions can then be explored.

Almost every non-gcc compiler counts here; I can currently test Sun
Workshop/Forte and the native IRIX compiler (since I now have SPARC and
IRIX/MIPS machines at home).
Comment 11 Mike A. Harris 2005-04-17 18:54:34 UTC
The "typeof" operator and the ({ }) syntax are both gcc-specific
extensions.  That is why they are conditionalized in the code, to be
used only when compiling with gcc.  The purpose of using typeof is to
take advantage of this gcc feature when gcc is the compiler.  The ({ })
syntax, however, which originates in the kernel code I used as a
starting point, is unnecessary - and I don't think it is supported in
all gcc versions - so it should be changed to more traditional syntax.
I'll do that in my first patch.
Comment 12 Roland Mainz 2005-04-17 18:59:18 UTC
(In reply to comment #11)
>  I'll do that in my first patch.

How do you want to prevent the inconsistency between platforms that I
described in comment #10?
Comment 13 Mike A. Harris 2005-04-17 19:02:01 UTC
>Erm... that would mean that gcc evaluates |x| and |y| once while
>non-gcc compilers would evaluate them twice - which would lead to
>unexpected results if someone (theoretically) tried to do something like
>|res = MIN(*s++, 90);| (it's one of the common traps of the C language;
>the book "Von C to C" (I don't have the ISBN number right now) has
>really bad examples of how things can go wrong... ;-().

Yes, that is correct.  The current macro use in the tree does just
that, so this is no worse than what is in the tree right *now*.  The
current macros are not all written the same way and have many other
problems.  The current usage of those macros just "luckily" does not
trigger bugs in the macros.

The solution I am proposing, merely centralizes all of these macros to
one single location, using a single standard set of macros that are then
used everywhere in the tree instead of having 200 copies of them with
different and broken implementations.

This generalized macro set then becomes the default for all compilers,
which is better than what is in the tree right now and has been forever.
Further improvements to the general case, which work on multiple
compilers and prevent other problems such as double inclusion, would be
nice improvements as well.

The gcc specific case is intended to use additional features of gcc to
improve the macros further.  The gcc specific case only directly improves
things when using gcc to compile.  But since the code will be compiled
on various platforms with and without gcc, the other non-gcc platforms
benefit indirectly: when we fix a bug on Linux with gcc that was
discovered by this gcc-specific extension, that fixes the same bug on
systems that do not use gcc, where the compiler would gladly have
generated bad code.
Comment 14 Mike A. Harris 2005-04-17 19:05:17 UTC
> How do you want to prevent the inconsistency between platforms I have
> described in comment #10 ?

I don't see it as a problem to begin with.  Other compilers don't
necessarily have "typeof", but that is not an argument against
conditionally using this mechanism in the Linux+gcc case.  As I said
above, this "inconsistency"
provides indirect benefit to those platforms that do not support this
extension, because bugs will be more likely to _be_ discovered when
_using_ gcc, which then also fixes the same bug for platforms that do
not have a compiler with this feature.

Comment 15 Roland Mainz 2005-04-17 19:10:40 UTC
(In reply to comment #13)
> >Erm... that would mean that gcc evaluates |x| and |y| once while
> >non-gcc compilers would evaluate them twice - which would lead to
> >unexpected results if someone (theoretically) tried to do something like
> >|res = MIN(*s++, 90);| (it's one of the common traps of the C language;
> >the book "Von C to C" (I don't have the ISBN number right now) has
> >really bad examples of how things can go wrong... ;-().
> 
> Yes, that is correct.  The current macro use in the tree does just that, so
> this is no worse than what is in the tree right *now*.  The current macros
> are not all written the same way, and have many other problems.  The current
> usage of those macros just "luckily" does not trigger bugs in the macros.

I have nothing against defining one proper version of |MIN()|, |MAX()|
etc. in a default location.  However, I am strongly against the
introduction of macros which behave differently based on the compiler
being used.  If you want to introduce such a horror, please consult
xorg-arch FIRST before adding something like that.

> The solution I am proposing, merely centralizes all of these macros to
> one single location, using a single standard set of macros that are then
> used everywhere in the tree instead of having 200 copies of them with
> different and broken implementations.

The proposed addition of |min_t| and |max_t| is even more broken and not
considered "good" style in most projects I am aware of.  I doubt such an
addition would pass code review at Amersham, KDE, mozilla.org or Sun.

If you still want to add the new macros please ask for approval in xorg-arch first.
Comment 16 Mike A. Harris 2005-04-17 19:12:55 UTC
This is only a proposal.  I intend to refine things until I think they're
ready, and then submit for discussion to xorg-arch.  That was the intention
all along.  Discussing it beforehand with a small number of people is a good
way of working the kinks out without inciting a major bikeshedding
discussion.
Comment 17 Roland Mainz 2005-04-17 19:14:44 UTC
(In reply to comment #14)
> > How do you want to prevent the inconsistency between platforms I have
> > described in comment #10 ?
> 
> I don't see it as a problem to begin with.

I do, as this kind of issue (e.g. platform/compiler-specific handling of
the number of argument evaluations in a CPP macro) is VERY hard to find,
since it is hidden in some header and not in the main code.  |MIN()| and
|MAX()| have become the more or less de-facto standard for how this
should work.  Adding a derived form which behaves differently on a
per-compiler/platform(!!) basis is VERY bad and almost a taboo in
portable programming.
Comment 18 Roland Mainz 2005-04-17 19:24:43 UTC
(In reply to comment #16)
> This is only a proposal.

OK... but I would suggest simply #defining |MIN()|/|MAX()|/|ABS()| with
a proper definition in a global location, and not trying to reinvent the
wheel with new code which may even add further problems.  I assume that
if you replace |MIN()|/|MAX()|/|ABS()| with the min_t/max_t stuff, the
MIN()/MAX() macros will reappear after some time because people are not
aware of them, and then the whole mess starts again.

> I intend to refine things until I think they're
> ready, and then submit for discussion to xorg-arch.  That was the intention
> all along.

OKOK... :)
(and sorry for being a little bit grumpy - but I have had to debug such
portability issues for years, and too often the problem was something
deeply hidden in nested CPP macros, where half of them were unnecessary
work as the compiler would already have been smart enough to do the
same).

> Discussing it beforehand with a small number of people is a good
> way of working the kinks out without inciting a major bikeshedding
> discussion.

BTW: The type-checking argument is for /dev/null.  Sun Workshop/Forte
(and AFAIK gcc, too) warn if the arguments of a ?:-expression differ in
data type.
Comment 19 Mike A. Harris 2005-04-17 23:17:24 UTC
While working on this, I discovered that xc/programs/Xserver/include/misc.h 
already provides the min()/max() macros, and they're used heavily throughout
the server already.

pts/8 mharris@laser:~/rpmbuild/xorg-x11-6.8.2/xc/programs/Xserver$ grep -r '\Wmin(' * | wc -l
    322


Therefore, there is long-established practice of using lowercase names
for these in the X server.  The problem is that not everyone used them;
instead, people created their own uppercase versions, mostly declared
incorrectly.

Comment 20 Mike A. Harris 2005-04-17 23:39:11 UTC
Ok, I wrote a small, simple test application to compare the results of
the traditional min()/max() definitions against enhanced versions of the
macros using the gcc extensions.  I refer to the enhanced versions of
the macros as "gccmin()/gccmax()" in the code, whereas min()/max() are
the traditional definitions.

I'll attach the code I'm using for testing, for documentation and
comparison purposes.  Feedback/comments welcome.
Comment 21 Mike A. Harris 2005-04-17 23:42:02 UTC
Created attachment 2442 [details]
minmax.c

minmax.c is a small app comparing the operation of traditional style
min()/max() macros with versions of the macros rewritten to use gcc
extensions to prevent multiple evaluation from occurring when arguments
with side effects are passed.
Comment 22 Mike A. Harris 2005-04-17 23:43:29 UTC
Results of compilation of above test utility:

pts/6 mharris@laser:~$ gcc -Wall minmax.c -o minmax
minmax.c: In function `main':
minmax.c:61: warning: comparison of distinct pointer types lacks a cast
minmax.c:61: warning: comparison of distinct pointer types lacks a cast
minmax.c:67: warning: comparison of distinct pointer types lacks a cast
minmax.c:69: warning: comparison of distinct pointer types lacks a cast


Results of invocation:

pts/6 mharris@laser:~$ ./minmax
Test cases for same-type comparisons:
Traditional min/max without side effects:  min(a,b)=10, max(a,b)=20
gccmin/gccmax without side effects      :  min(a,b)=10, max(a,b)=20
Traditional min with side effects   :  min(a++,b++)=11
Traditional max with side effects   :  max(a++,b++)=21
gccmin with side effects            :  min(a++,b++)=10
gccmax with side effects            :  max(a++,b++)=20

Testing for mismatched type in min/max comparisons.
Traditional min/max without side effects:  min(x,y)=10, max(x,y)=20
gccmin/gccmax without side effects      :  min(x,y)=10, max(x,y)=20
Traditional min with side effects   :  min(x++,y++)=11
Traditional max with side effects   :  max(x++,y++)=21
gccmin with side effects            :  min(x++,y++)=10
gccmax with side effects            :  max(x++,y++)=20
Comment 23 Mike A. Harris 2005-04-18 00:55:07 UTC
Thinking about this a bit more, I think it makes more sense to
separate the two issues here into:

1) Centralizing min()/max() macro usage and cleaning up the entire source
   code tree to use the centralized macros uniformly.

and

2) Discussing the benefits of using compiler specific enhancements to
   autodetect min/max misuse at compile time possibly issuing warnings
   and/or autocorrecting the behaviour.

Let's focus on problem #1 for now, and get closure on that first.  Then
we can resume discussion on if and how we might be able to apply newer
compiler features to enhance bug detection for particular cases.


Problem #1:

Since Xserver/include/misc.h already defines min() and max() and it's used
all over dix/mi and other places in the tree, we should change every file
elsewhere in the tree which uses MIN()/MAX() to use #include "misc.h" and
use the existing min()/max() macros.

If we were just adding these for the first time in a central location,
I would favour the uppercase versions as previously indicated, however
since the tree already declares them lowercase and it's used extensively
in over 300 locations, I think we should leave the definition as lowercase
because there are out of tree users of mi/dix such as VNC and various
other servers.  However, it might be useful to add uppercase definitions
to misc.h as:

#define MIN min
#define MAX max

That way if someone tries to define them themselves and has misc.h
included, the compiler will yell.

Does anyone see a problem with making all in-tree usage in lowercase
to match what is already there for consistency?


Comment 24 Mike A. Harris 2005-04-18 00:55:43 UTC
Adding xorg-team back, so others are seeing this too.  ;)
Comment 25 Mike A. Harris 2005-04-18 04:47:16 UTC
Ok, for problem #2...  I came up with a kind of nifty solution.

By using gccisms in the gcc case, I think we can get the benefit of
gcc's type checking while avoiding the double-evaluation protection, by
doing the following:

#define min(x,y) ({ \
        typeof(x) _x = (x);     \
        typeof(y) _y = (y);     \
        (void) (&_x == &_y);            \
        ((x) < (y) ? _x : _y); })

This way, both x and y are evaluated twice, just as in the traditional
min/max implementation, but we still get type checking for free.  I think
this should provide results that are consistent with what the traditional
macros would provide.

This is just a theoretical suggestion for now, though.  Before I
officially propose it, I want to add tests to my test app to verify that
it works as I think it should.

It's bedtime for me though, so I won't get to that until tomorrow night
or later.
Comment 26 Mike A. Harris 2005-04-18 04:49:44 UTC
Oops...  I made a mistake there.  Only one arg should be double
evaluated... not sure what I was thinking.  Here is the corrected version:


#define min(x,y) ({ \
        typeof(x) _x = (x);     \
        typeof(y) _y = (y);     \
        (void) (&_x == &_y);            \
        (_x < _y ? (x) : (y)); })
Comment 27 Mike A. Harris 2005-04-18 05:06:46 UTC
Ok, I decided it'd only take about 5 minutes, so I went ahead and
tested it anyway.  Here are the results of the testing; the latest copy
of the test util will be attached shortly:


pts/6 mharris@laser:~$ gcc -Wall minmax.c -o minmax
minmax.c: In function `main':
minmax.c:78: warning: comparison of distinct pointer types lacks a cast
minmax.c:78: warning: comparison of distinct pointer types lacks a cast
minmax.c:84: warning: comparison of distinct pointer types lacks a cast
minmax.c:86: warning: comparison of distinct pointer types lacks a cast
minmax.c:92: warning: comparison of distinct pointer types lacks a cast
minmax.c:92: warning: comparison of distinct pointer types lacks a cast
minmax.c:98: warning: comparison of distinct pointer types lacks a cast
minmax.c:100: warning: comparison of distinct pointer types lacks a cast


This shows that gcc flags comparisons between different types with
compiler warnings, giving us a benefit over the traditional macros.

But does it generate different results than the traditional macros?
Lets find out...

Test cases for same-type comparisons:
Traditional min/max without side effects:  min(a,b)=10, max(a,b)=20
gccmin/gccmax without side effects      :  min(a,b)=10, max(a,b)=20
Traditional min with side effects   :  min(a++,b++)=11
Traditional max with side effects   :  max(a++,b++)=21
gccmin with side effects            :  min(a++,b++)=10
gccmax with side effects            :  max(a++,b++)=20

Testing for mismatched type in min/max comparisons.
Traditional min/max without side effects:  min(x,y)=10, max(x,y)=20
gccmin/gccmax without side effects      :  min(x,y)=10, max(x,y)=20
Traditional min with side effects   :  min(x++,y++)=11
Traditional max with side effects   :  max(x++,y++)=21
gccmin with side effects            :  min(x++,y++)=10
gccmax with side effects            :  max(x++,y++)=20

Testing for mismatched type in min/max comparisons without double exec protection.
Traditional min/max without side effects :  min(x,y)=10, max(x,y)=20
gccmin/gccmax noprot without side effects:  min(x,y)=10, max(x,y)=20
Traditional min with side effects        :  min(x++,y++)=11
Traditional max with side effects        :  max(x++,y++)=21
gccminnoprot with side effects           :  min(x++,y++)=11
gccmaxnoprot with side effects           :  max(x++,y++)=21


Since each arg is evaluated the same number of times as in the
traditional min/max macros, the double execution protection is now
removed, and we get the same results in both the traditional case and
the type-checked case using gcc extensions.

I'll attach the test app now...
Comment 28 Mike A. Harris 2005-04-18 05:07:34 UTC
Created attachment 2443 [details]
minmax.c
Comment 29 Mike A. Harris 2005-04-18 05:24:58 UTC
Since there were previous concerns about potential performance
differences between the macro implementations, I compiled the source on
the x86 architecture using the following quite common command-line
options:

gcc -Wall -O2 -march=i386 -mcpu=i686 --save-temps minmax.c -o minmax

Reviewing the compiler generated minmax.s assembler file shows that
gcc generates identical code for both the traditional min()/max()
and the gcc{min,max}noprot() versions, at least for the cases in
which the values are known at compile time.

I believe the results should be the same for values that can't be known
at compile time as well, but if someone thinks they might differ, I'd be
more than happy to add an additional test case and review the generated
assembler.

I can additionally test and review the generated code on x86_64, ia64,
ppc, ppc64, s390, and s390x if there are concerns about optimization
variance between architectures.  I could write a simple shell script to
do this in our build systems in about 10 minutes, so if anyone has
concerns about performance, let me know.
Comment 30 Daniel Stone 2007-02-27 01:26:12 UTC
Sorry about the phenomenal bug spam, guys.  Adding xorg-team@ to the QA contact so bugs don't get lost in future.
Comment 31 Adam Jackson 2009-06-03 10:46:23 UTC
Fixed in git

