It is theorized that the Adobe Flash Player renders frames with 4× FSAA (take a look at a histogram of a rendering). This behaviour should be matched in SWFDec to avoid the rendering errors caused by premature anti-aliasing. To achieve this:

• Anti-aliasing must be disabled.
• The frame must be rendered at 4× its final dimensions.
• The frame must be resampled to its final dimensions with enough samples per pixel to get good results.

I have been hacking around with the ImageSurface, which takes only 4 samples when painting a scaled source surface. Because of this, it's necessary to resample over several size-halving iterations to maintain the best quality rendering (a sketch of that halving loop is at the end of this comment). The process of creating and painting these intermediate surfaces is far too slow for real-time rendering, but then I don't think we're using ImageSurface in the backend anyway - I think it's an XlibSurface provided by GDK.

What I'd like to know first of all is how well SWFDec can render at 4× size without performing the resampling. In my experience with Cairo for Python, things get slow very quickly, and this worries me somewhat. Is the bottleneck actually in painting the massive surface to screen (which wouldn't be necessary if we resampled)? Unfortunately, due to broken Ubuntu Edgy repositories, I can't get the build dependencies for SWFDec right now, so I can't answer this question myself ATM.

Thanks
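For reference, here is a minimal sketch of the halving loop I have in mind, using plain cairo image surfaces. The name downsample_by_halving() is just illustrative (it is not a SWFDec or cairo function), and this assumes the frame has already been rendered, with anti-aliasing disabled, into a surface at 4× the target size:

#include <cairo.h>

/* Hypothetical helper: repeatedly halve an image surface until it
 * reaches final_w wide. Since cairo takes ~4 samples per destination
 * pixel when scaling, a single 1/2 pass covers a full 2x2 source
 * block, so two passes take a 4x rendering down to final size without
 * dropping samples. */
static cairo_surface_t *
downsample_by_halving (cairo_surface_t *big, int final_w)
{
    cairo_surface_t *src = cairo_surface_reference (big);
    int w = cairo_image_surface_get_width (big);
    int h = cairo_image_surface_get_height (big);

    while (w > final_w) {
        cairo_surface_t *dst;
        cairo_t *cr;

        w /= 2;
        h /= 2;
        dst = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, w, h);
        cr = cairo_create (dst);
        /* Scale the source by exactly 1/2 and paint it onto the
         * half-size destination surface. */
        cairo_scale (cr, 0.5, 0.5);
        cairo_set_source_surface (cr, src, 0, 0);
        cairo_pattern_set_filter (cairo_get_source (cr),
                                  CAIRO_FILTER_BILINEAR);
        cairo_paint (cr);
        cairo_destroy (cr);
        cairo_surface_destroy (src);
        src = dst;
    }
    return src; /* caller owns this reference */
}

The intermediate surface allocation in each pass is exactly the overhead I'm worried about for real-time use; this is just to make the scheme concrete.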
I don't think that pixel accuracy is necessarily a good goal. That means that swfdec would have to be bug-for-bug compatible with the Flash renderer, which also means ditching cairo and writing a new, flash-bug-compatible drawing library.
Maybe, but there's no arguing that premature AA is causing rendering errors. I still say this is a bug in Cairo, but I don't think that's a popular opinion.