
I only skimmed the article, but that strikes me as a "mistranslation".

Adobe is emphasizing that they have to do more compositing logic, so they need an RGB version of the video frame on the CPU side.

If they did the colorspace conversion on the GPU, they'd have to pull the converted image back and incur the latency hit. Apparently, they see less latency in doing the conversion on the CPU, and have made the call to trade processor efficiency for latency.

To be clear: YUV colorspace conversion on a GPU is really damn simple. I have a ~30 line shader that does it from my video processing codebase. But I can take the hit - I do a substantial amount of image manipulation using shaders on the GPU, and mask the read latency with heavy multi-threading. A Flash application doesn't have this luxury.
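To give a sense of how simple the math is, here's a rough sketch of a per-pixel YUV→RGB conversion in plain Python (the shader version is just these same multiply-adds per fragment). This assumes full-range BT.601 coefficients; the actual shader from my codebase may use different constants or a matrix form.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV pixel (0-255 per channel) to RGB.

    U and V are chroma offsets centered at 128; the constants below are
    the standard BT.601 full-range conversion coefficients.
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp to the valid 8-bit range, since the linear combination can overshoot.
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)
```

On a GPU this runs per-fragment, so a full-frame conversion is essentially free compared to the readback cost the comment above is describing.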

But this discussion is largely academic - if you're writing a Flash-based video player that doesn't need the flexibility of Flash's full complement of image manipulation and compositing functionality, you'd be using their StageVideo API - an API that does do all the video work on the GPU. This API was introduced in Flash 10.2, which came out after this article from 2010.


