It would be "good" if the input device selection for a touchscreen, or indeed any absolute indexed pointer device, could be associated to a screen. In the alternate there should be a way to assign such a pointing device to a screen at runtime. The value of this should be evident to anybody who has touch/dragged a touch screen and had it select text on the wrong screen. Optimally the assignment should follow the relative origin of the virtual X,Y origin of the screen as it moves around the virtual desktop.
Please see the transformation matrix documentation: http://wiki.x.org/wiki/XInputCoordinateTransformationMatrixUsage
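For reference, a minimal sketch of what that page describes (the device name and geometry below are made up for illustration): the matrix maps device coordinates onto the whole virtual desktop, so to pin an absolute device to a 1280x1024 output sitting at +1920+0 on a 3200x1080 desktop you would set

    # width scale  = 1280/3200 = 0.4,  x offset = 1920/3200 = 0.6
    # height scale = 1024/1080 ~= 0.948, y offset = 0/1080   = 0
    xinput set-prop "Example TouchScreen" --type=float \
        "Coordinate Transformation Matrix" \
        0.4 0 0.6   0 0.948 0   0 0 1

The catch, as the next comment points out, is that these numbers have to be recomputed and re-applied (by hand or by a script) every time the layout changes.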
(In reply to comment #1)
> Please see the transformation matrix documentation:
>
> http://wiki.x.org/wiki/XInputCoordinateTransformationMatrixUsage

So there _is_ a way to do this, but it is horrifically unfriendly; it cannot "follow" a monitor that is a movable or resizeable viewport without running an external command whenever the viewport moves, and the external command would have to be rebuilt whenever the geometry of the setup changes (such as docking a laptop to different numbers/types of displays).

Ideally (I know, I am sounding lazy here 8-) there could/should be something like

    Section "InputDevice"
        Identifier "SomeTouchscreen"
        ...
    EndSection

    Section "ServerLayout"
        Screen 0 "SomeScreen"
        Screen 1 "SomeOtherScreen" LeftOf 0 ContainsPointer "SomeTouchscreen"
    EndSection

--or--

    Section "InputDevice"
        ...
        Option "BoundTo" "SomeOtherScreen"
    EndSection

[Not that I have really thought about the correct naming.]

It looks, from reading the link you provided, that it should be reasonable to hook up the transformation that the xinput command does directly to the screen mode and offset (with an event trap to notice when/if the origin and/or extent of a display changes). I am not up to the task of doing this at this time, as I know nothing of the necessary internals.

After years of being dead in the water, the touchscreen market has been enlivened by the smartphone. Things like the MIMO 720S (DisplayLink + touchscreen device) are coming to market, and having a uniform way to configure one as a "not surprising" ancillary device would be good. It is very surprising to have a touchscreen scale onto, or select across, one of the other monitors.

Another usage for the "Contained" pointer would be gaming on a multi-headed system, or several uses of an extended virtual desktop. Having a "second" mouse that will neither "escape" a given display nor move the viewport would be hugely useful in several scenarios I can think of.
(In reply to comment #2)
> So there _is_ a way to do this, but it is horrifically unfriendly; it cannot
> "follow" a monitor that is a movable or resizeable viewport without running an
> external command whenever the viewport moves, and the external command would
> have to be rebuilt whenever the geometry of the setup changes (such as docking
> a laptop to different numbers/types of displays).

Correct. This is supposed to be handled by the desktop environment. Please file a bug with GNOME/KDE/whatever your preferred DE is. This will _not_ be integrated in the drivers; there are too many user-specific variables.

> Screen 1 "SomeOtherScreen" LeftOf 0 ContainsPointer "SomeTouchscreen"
> ...
> Option "BoundTo" "SomeOtherScreen"

This doesn't fully handle:
- on the fly mapping (docking stations, external projectors)
- mapping to sections of the screen instead of the whole screen
- dynamic layouts
- non-XRandR drivers (nvidia, hooray)
- screen rotation being different to device rotation

> [Not that I have really thought about the correct naming.]
>
> It looks, from reading the link you provided, that it should be reasonable to
> hook up the transformation that the xinput command does directly to the screen
> mode and offset (with an event trap to notice when/if the origin and/or extent
> of a display changes).

Correct. The wacom driver's xsetwacom has a MapToOutput command that does exactly that (without the event trap). It'd be trivial to rip the code out and add it to xinput if you're inclined to do so.

> Another usage for the "Contained" pointer would be gaming on a multi-headed
> system, or several uses of an extended virtual desktop. Having a "second" mouse
> that will neither "escape" a given display nor move the viewport would be
> hugely useful in several scenarios I can think of.

PointerBarriers are better for containment of relative devices.
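For anyone finding this later, the wacom command referred to above is used roughly like this (the device and output names are examples, not taken from this bug):

    # map a wacom tool onto a single RandR output
    xsetwacom set "Wacom Intuos4 6x9 Pen stylus" MapToOutput HDMI1
    # depending on the driver version, an explicit geometry may also be accepted
    xsetwacom set "Wacom Intuos4 6x9 Pen stylus" MapToOutput 800x600+1366+0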
(In reply to comment #3)
> > Screen 1 "SomeOtherScreen" LeftOf 0 ContainsPointer "SomeTouchscreen"
> > ...
> > Option "BoundTo" "SomeOtherScreen"
>
> this doesn't fully handle:
> - on the fly mapping (docking stations, external projectors)
> - mapping to sections of the screen instead of the whole screen
> - dynamic layouts
> - non-XRandR drivers (nvidia, hooray)
> - screen rotation being different to device rotation

I think all of these could be handled by three properties. One which merely set the device's active area (think Synaptics Area), one which bound it to (a region of, defaulting to all of) a CRTC, and one boolean for 'rotate with the CRTC'. At that stage you're just left with quite remarkably esoteric corner cases that I can see, and it does actually solve a real problem that a lot of people have without requiring any kind of client smarts.

Non-RandR 1.2 drivers lose I guess, but that's not our fault or problem.
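To make the proposal concrete, the three properties might be driven from the command line something like this. These property names are hypothetical — none of them exist in any driver today; this is only a sketch of the shape of the idea:

    # 1. restrict the device's active area, in device coordinates (cf. Synaptics Area)
    xinput set-prop "SomeTouchscreen" "Active Area" 0 0 4095 4095
    # 2. bind the device to (a region of) a CRTC/output
    xinput set-prop "SomeTouchscreen" "Bind To Output" "LVDS1"
    # 3. follow that CRTC's rotation automatically
    xinput set-prop "SomeTouchscreen" "Rotate With Output" 1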
(In reply to comment #3)
> (In reply to comment #2)
> > Screen 1 "SomeOtherScreen" LeftOf 0 ContainsPointer "SomeTouchscreen"
> > ...
> > Option "BoundTo" "SomeOtherScreen"
>
> this doesn't fully handle:
> - on the fly mapping (docking stations, external projectors)
> - mapping to sections of the screen instead of the whole screen
> - dynamic layouts
> - non-XRandR drivers (nvidia, hooray)
> - screen rotation being different to device rotation

That's why it would be optional. There are very few cases where you would want a touchscreen to map to things not visible on that screen.

Using the touchscreen, in the case of things like this mini/outboard touchscreen, as a window/viewport into a larger desktop would lead you to want to be able to move the viewport and push the buttons that were so revealed. So you would want the pointer constrained to the display that the related screen displays, but you would likely want a way to initiate movement of that viewport if the underlying screen is motile over the display (e.g. when the viewport isn't locked you might want edge-panning support).

The extension of the touchscreen model to non-touch devices is just the generic transformation of the idea (which should make the code easier).
How and When did this go to status RESOLVED and FIXED? I don't feel I have the project cred to change it myself, but it looks awfully "under-discussion" to me.
(In reply to comment #4)
> I think all of these could be handled by three properties. One which merely
> set the device's active area (think Synaptics Area),

We need _something_ like this since it can also be used for calibration. However, I think it requires deeper driver integration and I'm not sure yet how to implement it.

> one which bound it to (a region of, defaulting to all of) a CRTC,

That's what we already have.

> and one boolean for 'rotate with the CRTC'.

You know my stance on that. If we have something that rotates the CRTC, we should teach it to rotate the input device too. This boolean is the simple solution now, but I think it'd even hurt us in the long term.

> Non-RandR 1.2 drivers lose I guess, but that's not our fault or problem.

It is. Dismissing nvidia's market share (especially where wacom tablets are concerned) doesn't fix the problem.

(In reply to comment #6)
> How and When did this go to status RESOLVED and FIXED?
>
> I don't feel I have the project cred to change it myself, but it looks awfully
> "under-discussion" to me.

We have a method to bind an absolute device to any region on the desktop (including screens), which is how I understand comment 0. For moving the viewport around, please file a separate bug.
(In reply to comment #7)
> (In reply to comment #4)
> > How and When did this go to status RESOLVED and FIXED?
> >
> > I don't feel I have the project cred to change it myself, but it looks awfully
> > "under-discussion" to me.
>
> We have a method to bind an absolute device to any region on the desktop
> (including screens) - which is how I understand comment 0. For moving the
> viewport around please file a separate bug.

Where (in the config file) do I bind an input device to a screen? Manually calculating a set of floating-point values for every co-variant screen layout possible on, say, a laptop, and then writing a script to invoke the correct one on each Xorg server start, is beyond the average user's likely skill level.

For instance, on my laptop I have between one and three possible output scenarios _before_ I consider plugging in the displaylink+touchscreen thing. Further, the ratio of the touch-screen is nowhere near compatible with the transformation matrix. In point of fact, the Screen 1 in my ideal scenario wouldn't be part of the main multi-screen layout (e.g. it would be display :0.1 in most usages).

The xinput transformation thing is not a reasonable solution beyond some _very_ simple cases.
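To illustrate the amount of hand-math involved, here is the arithmetic for one hypothetical layout only (a 1366x768 panel with an 800x600 USB touchscreen placed to its right, giving a 2166x768 virtual desktop); the device name is made up:

    # touchscreen output is 800x600 at +1366+0 on a 2166x768 desktop:
    #   width scale  = 800/2166  ~= 0.369
    #   x offset     = 1366/2166 ~= 0.631
    #   height scale = 600/768    = 0.781
    xinput set-prop "USB Touchscreen" --type=float \
        "Coordinate Transformation Matrix" \
        0.369 0 0.631   0 0.781 0   0 0 1

Every one of those numbers changes the moment the laptop is docked to a different set of outputs.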
(In reply to comment #7)
> (In reply to comment #4)
> > How and When did this go to status RESOLVED and FIXED?
> >
> > I don't feel I have the project cred to change it myself, but it looks awfully
> > "under-discussion" to me.
>
> We have a method to bind an absolute device to any region on the desktop
> (including screens) - which is how I understand comment 0. For moving the
> viewport around please file a separate bug.

(Invoking my internet-granted right to meta-argue.)

Title of bug: Absolute Pointer Devices (touchscreens) should be screen specific

And restating comment 0:

(In reply to comment #0)
> It would be "good" if the input device selection for a touchscreen, or indeed
> any absolute indexed pointer device, could be associated with a screen.
>
> Alternatively, there should be a way to assign such a pointing device to a
> screen at runtime.
>
> The value of this should be evident to anybody who has touch/dragged a touch
> screen and had it select text on the wrong screen.
>
> Optimally, the assignment should follow the virtual X,Y origin of the screen
> as it moves around the virtual desktop.

This bug was "resolved" and "fixed" by referencing a way to tie a pointer to a section of the virtual desktop using xinput and math. This in no way "resolves" or "fixes" the bug. Opening a second bug with the same request is neither useful nor conducive to progress. The resolving of the bug was improper.
Committed too soon, sorry...

Opening a fresh bug with exactly the same request — that pointer devices should be able to bind to/follow/be relative to _screens_ — seems unhelpful and unlikely to result in a different outcome.

Limiting the pointer to part of the desktop is a hackish means of simulating the effect of binding a pointer to a screen, and is beyond reasonable expectation for the normal user when things get nontrivial. If the pointer were tied to the screen, it would follow the properties of the screen: when moving it across the desktop, when moving the viewport, when switching modelines, and when the screen was part of one large or one of several small desktop regions (e.g. display :0.0 vs :0.1 etc.).

Pointers, particularly absolute-coordinate pointers, should have a selectable natural affinity for screens. The word screen is even part of the word touchscreen. So while you may have taken "tied to screen" to mean "constrained to region of desktop", they are not the same thing at all.

For instance, I should be able to go into /etc/X11/xorg.conf.d and create a fragment file listing my USB touchscreen device's input device, possibly-empty monitor, and screen sections, where the input device is bound to the screen. When the device is plugged in, that fragment file should marry it all up regardless of whether I am using my laptop on my lap, docked to my TV, or docked to my high-resolution display. The LCD, LCD+TV, LCD+HDTV, or LCD+monitor combinations create some very odd virtual desktops, particularly since my LCD is 1366x768. Tossing in the 800x600 plugable device makes the sets of maths and the plurality of combinations rather non-obvious and clearly non-automatic.

This is the bug where I suggest we be able to bind pointers to screens, with all which that entails; doing it in fragments and pieces (bind to viewport here, bind to virtual desktop region there, etc.) is to beg later rewrites.
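For contrast, the closest thing to such a fragment that works today is a static matrix baked into an InputClass section. This is only a sketch: the product string is invented, and it assumes a driver that honors the TransformationMatrix option (xf86-input-libinput does). Note that it hard-codes exactly one layout, which is the whole complaint:

    Section "InputClass"
        Identifier         "USB touchscreen, one fixed layout"
        MatchIsTouchscreen "on"
        MatchProduct       "Example 800x600 TouchScreen"   # invented name
        # frozen for an 800x600 region at +1366+0 on a 2166x768 desktop;
        # wrong the moment the laptop is docked differently
        Option "TransformationMatrix" "0.369 0 0.631 0 0.781 0 0 0 1"
    EndSection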
(In reply to comment #4)
> (In reply to comment #3)
> > > Screen 1 "SomeOtherScreen" LeftOf 0 ContainsPointer "SomeTouchscreen"
> > > ...
> > > Option "BoundTo" "SomeOtherScreen"
> >
> > this doesn't fully handle:
> > - on the fly mapping (docking stations, external projectors)
> > - mapping to sections of the screen instead of the whole screen
> > - dynamic layouts
> > - non-XRandR drivers (nvidia, hooray)
> > - screen rotation being different to device rotation
>
> I think all of these could be handled by three properties. One which merely
> set the device's active area (think Synaptics Area), one which bound it to (a
> region of, defaulting to all of) a CRTC, and one boolean for 'rotate with the
> CRTC'. At that stage you're just left with quite remarkably esoteric corner
> cases that I can see, and it does actually solve a real problem that a lot of
> people have without requiring any kind of client smarts.
>
> Non-RandR 1.2 drivers lose I guess, but that's not our fault or problem.

At some point the viewport of the display is projected over the virtual desktop of which it is a member. I don't know if the code in charge of this is part of RandR (seems like it would be), nor what handles this for "non-RandR" drivers (again, I don't have enough knowledge of the code base), but it seems that at the moment when the screen's size, position, orientation, and virtual desktop (display=:0.0 vs. display=:0.1 etc.) are known, the entirety of the transformation-matrix information à la xinput is already in hand.

The case I can think of for wanting a constrained pointer to be rotated distinctly from the viewing area would be a horizontal conference-table type thing, where the guy with the mouse might want to be "above" the image (as presenter to people across the desk) or below the image (normal user). This would never be inverted for a touch screen/touch-desk, but it would be for a mouse. So I would think that rotating the pointer with the display would be normal and only a vanishingly small number of uses would be otherwise.

So I have

    Section "InputDevice"
        ...
        Option "BindToScreen"     "1"     # default: not bound
        Option "RotateWithScreen" "true"  # default: true
    EndSection

I don't see the third property offhand, but again, I'm not familiar with the internals.
http://cgit.freedesktop.org/xorg/app/xinput/commit/?id=8563e64fa4eeaf7b56374fd6695f026d98f1696d xinput support for screen mapping
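For anyone arriving here from a search, the subcommand that commit adds is used along these lines (device and output names are examples):

    # map the device's coordinate range onto a single RandR output;
    # rerun it after every layout change, it does not track the output
    xinput map-to-output "USB Touchscreen" VGA1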
Closing as WONTFIX. We have runtime tools to handle this situation; that's pretty much all that will happen.