
A Good Article on HEVC/H265


What You Should Know About the H.265 Video Codec

H.264's successor is coming...eventually. The video codec has been approved, but it won't change the face of web video--bringing 4K support and up to 50 percent lower bandwidth costs--until chips with hardware decoding capabilities are released in 2014.

H.264 is practically synonymous with HD video. Even if you don't know a thing about video codecs, you've probably heard of H.264. It's the video encoding process used for Blu-ray, Netflix, YouTube, and Vimeo. Watch a video on the web, and chances are H.264 encoding is responsible for delivering a great picture at a bitrate your Internet connection can handle--80 percent of web video now runs on the H.264 codec. But H.264 isn't ready for the monstrous amount of data it would take to encode 4K video, which is why a new standard is peeking over the horizon. It's called H.265.

Well, technically, H.265 goes by the name of HEVC, or High Efficiency Video Coding. As H.265's longer name implies, the video codec is designed to succeed H.264 with a more efficient encoding standard. That means it will support video files at 4K and 8K resolutions while simultaneously improving on current streaming by cutting the required bitrate by up to 50 percent. It does this by relying on more specialized hardware and computing power: with improved compression algorithms, the more time a computer has to process a video, the more compactly that video can be compressed without losing too much visual fidelity. H.265 will require more computational power than its predecessor, but the trade-off is a no-brainer. Computer processors, especially mobile CPUs, grow more powerful every year, but our network infrastructure and bandwidth speeds are growing much more slowly. HEVC should theoretically make current HD video streaming more efficient while paving the way for a future of 4K content.
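To put that bitrate claim in rough numbers, here is a back-of-the-envelope sketch in Python. The 5 Mbps figure for a 1080p H.264 stream is an assumed ballpark, not a number from the article; only the "up to 50 percent" reduction comes from the text above.

```python
# Rough, illustrative estimate of the data an HEVC stream could save.
# The 5 Mbps H.264 bitrate for 1080p is an assumed ballpark value.

h264_bitrate_mbps = 5.0                        # assumed typical 1080p H.264 stream
hevc_bitrate_mbps = h264_bitrate_mbps * 0.5    # "up to 50 percent" lower bitrate

def gigabytes_per_hour(bitrate_mbps):
    # megabits/s -> megabytes/s -> gigabytes per hour (rough conversion)
    return bitrate_mbps / 8 * 3600 / 1024

print(f"H.264: {gigabytes_per_hour(h264_bitrate_mbps):.2f} GB/hour")
print(f"HEVC : {gigabytes_per_hour(hevc_bitrate_mbps):.2f} GB/hour")
```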

H.265 took a big step towards actual implementation on Friday, as the International Telecommunication Union announced the standard had received first-stage approval. That doesn't mean H.265 is completely finished--the ITU's press release notes that HEVC extensions are still in development--but it's out of draft status and ready to be unleashed on the world. Of course, adoption can't happen overnight.

For H.265 to take off, chipmakers will have to release hardware that supports HEVC decoding. You're able to watch Netflix and YouTube on your smartphone because the graphics chip inside it can decode H.264 video, but those chips won't just magically work with H.265. A few companies like Broadcom have announced hardware chips with HEVC support; Broadcom's processor can handle 4096x2160p video at 60 frames per second, but volume production won't begin until mid-2014. 4K streaming likely won't be the immediate draw of H.265, though--we expect companies like Netflix to eagerly support the codec, since it will allow them to stream 1080p video at half the currently used bitrate.

Interested in a super technical description of how H.265 works compared to H.264? Read these snippets from the Overview of the HEVC Standard white paper:

"The core of the coding layer in previous standards was the macroblock, containing a 16×16 block of luma samples and...two corresponding 8×8 blocks of chroma samples; whereas the analogous structure in HEVC is the coding tree unit (CTU), which has a size selected by the encoder and can be larger than a traditional macroblock. The CTU consists of a luma CTB and the corresponding chroma CTBs and syntax elements. The size L×L of a luma CTB can be chosen as L = 16, 32, or 64 samples, with the larger sizes typically enabling better compression. HEVC then supports a partitioning of the CTBs into smaller blocks using a tree structure and quadtree-like signaling.

...new features are introduced in the HEVC standard to enhance the parallel processing capability or modify the structuring of slice data for packetization purposes. Each of them may have benefits in particular application contexts...


1) Tiles: The option to partition a picture into rectangular regions called tiles has been specified.

2) Wavefront parallel processing: When wavefront parallel processing (WPP) is enabled, a slice is divided into rows of CTUs...WPP provides a form of processing parallelism at a rather fine level of granularity, i.e., within a slice. WPP may often provide better compression performance than tiles (and avoid some visual artifacts that may be induced by using tiles).

3) Dependent slice segments: A structure called a dependent slice segment allows data associated with a particular wavefront entry point or tile to be carried in a separate NAL unit, and thus potentially makes that data available to a system for fragmented packetization with lower latency than if it were all coded together in one slice."
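To make the wavefront dependency described in item 2 concrete, here is a minimal Python sketch of a WPP-style processing schedule, assuming the usual two-CTU lag between consecutive CTU rows (CABAC contexts are inherited from the second CTU of the row above). The grid size is an arbitrary example, not anything from the standard text quoted here.

```python
# Minimal sketch of a wavefront schedule: row r can process CTU column c
# only after row r-1 has finished column c+1, i.e. a two-CTU lag per row.

def wpp_schedule(rows, cols):
    """Return, per time step, the (row, col) CTUs that could be processed
    in parallel under the two-CTU wavefront dependency."""
    steps = []
    t = 0
    while True:
        wave = [(r, t - 2 * r) for r in range(rows) if 0 <= t - 2 * r < cols]
        if not wave:
            break
        steps.append(wave)
        t += 1
    return steps

for step, ctus in enumerate(wpp_schedule(rows=4, cols=8)):
    print(f"t={step}: {ctus}")
```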

H.265 adoption will be slow until mass hardware support is behind it, which seems to be at least 18 months away. And, despite how promising the codec seems, there's no guarantee it will completely take over the video world in the same way H.264 has. Google's open, royalty-free VP9 could take a shot at H.265 on the web, and the Blu-ray Disc Association hasn't announced any plans for future codec support. We may be in for an HTML5 vs. Flash-style codec competition in 2015.


           
