What is VVC (H.266)?

October 25, 2024
7 Min
Video Education

Versatile Video Coding (VVC), or H.266, is the latest breakthrough in video compression technology, designed to cut bitrates by up to 50% without sacrificing quality. For developers working on video streaming platforms, VVC offers significant advantages in reducing bandwidth consumption and optimizing 4K, 8K, and VR content delivery. Its advanced compression algorithms are highly efficient, but integrating VVC into real-world streaming workflows introduces technical challenges, from encoding complexities to device compatibility. As video demand grows, understanding VVC's capabilities and implementation hurdles will be key for delivering the next generation of video experiences.


Core VVC technical features developers should know

  1. Block-based coding

VVC enhances traditional block-based coding by introducing non-square block sizes. This flexibility allows the codec to more efficiently handle complex visual data, such as detailed backgrounds or computer-generated graphics. In contrast to square-only blocks in previous codecs, VVC can dynamically adapt the block size to match the complexity of the scene.

When dealing with high-resolution content like 4K/8K video or dense visual effects, this adaptive partitioning lets developers achieve better compression in intricate areas, reducing file size and bandwidth usage while maintaining visual fidelity.

  2. Intra block copy

The Intra Block Copy feature allows VVC to reuse data from within the same frame, which drastically reduces the need for redundant encoding of static or repetitive content. This is especially beneficial for videos with recurring elements or little movement, like gaming environments, virtual sets, or news broadcasts.

For developers working on low-motion content or content with repetitive patterns (e.g., esports streams or static camera angles), this feature can significantly reduce the computational complexity and encoding time. It results in faster processing and lower server loads without sacrificing quality.

  3. Multi-dimensional motion compensation

VVC’s multi-dimensional motion compensation (MDMC) enhances traditional motion compensation by supporting more complex motion patterns like scaling, rotation, and perspective changes. It enables smoother handling of dynamic scenes, such as fast camera pans or moving objects.

Fast-paced content like live sports, action scenes, or VR applications will benefit from MDMC, as it improves video quality in scenes with complex motion. By minimizing artifacts and preserving sharpness during high-motion sequences, MDMC helps ensure a smooth viewing experience, even at lower bitrates.

Why it matters for video developers

These VVC features allow developers to encode high-resolution content more efficiently, reducing bandwidth consumption without compromising on quality. This directly impacts the server load, bandwidth costs, and overall streaming performance. By optimizing how visual data is encoded, developers can deliver seamless streaming experiences, even under constraints like limited network capacity or high traffic.

Comparing H.264 vs H.265 vs H.266

Each codec improves compression, processing demands, and support for higher resolutions. While H.264 is still widely used, H.265 and H.266 offer better performance for 4K, 8K, and beyond. However, they come with their own challenges. The table below compares these codecs based on key features like compression, supported resolutions, and adoption.

Feature | H.264 (AVC) | H.265 (HEVC) | H.266 (VVC)
Release year | 2003 | 2013 | 2020
Compression efficiency | Moderate improvement over earlier standards, cutting bitrate by about 50% compared to MPEG-2. | Up to 50% bitrate reduction compared to H.264 at the same quality; supports resolutions up to 8K. | Up to 50% bitrate reduction compared to H.265, with improved performance for 4K and 8K content.
Supported resolutions | Up to 4K (4096×2304) | Up to 8K (8192×4320) | Designed for 8K and beyond, including HDR and 360-degree video.
Computational complexity | Lowest complexity, making real-time encoding and decoding feasible on a wide range of devices. | More complex than H.264, requiring more processing power for encoding and decoding. | Significantly more complex than both, requiring up to roughly 10x more computational resources for encoding and about 1.5x for decoding.
Use cases | Widely used across streaming services, mobile devices, and Blu-ray discs. | Commonly adopted by streaming services like Netflix and YouTube for high-resolution content. | Targeted at high-resolution streaming, virtual reality, and bandwidth-sensitive ultra-high-definition delivery.
Adoption and support | Extensive support across devices and platforms; well established in the industry. | Gaining traction, but licensing complexity has slowed adoption; supported by many modern devices. | Early adoption stage; limited hardware support today, expected to grow as new chips are developed.

Real-time streaming and performance considerations

Parallel processing

VVC supports parallel processing, which allows different regions of a video frame to be encoded or decoded simultaneously. This capability is crucial for minimizing latency in real-time streaming, especially in scenarios like live sports, gaming, or virtual events where every millisecond counts.

For developers building live streaming platforms or real-time applications, parallel processing can help achieve faster encoding and decoding speeds, directly reducing latency. By processing regions of each frame in parallel, you can maintain smooth, real-time experiences even when streaming at 4K or higher resolutions.
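As a rough illustration of encoder-side parallelism, assuming a VVenC build whose vvencapp binary exposes --threads and --preset options (verify against vvencapp --help for your version), a multi-threaded 4K encode might look like this:

# Encode a raw 4K input using 16 worker threads; option names may differ between VVenC releases.
vvencapp --preset faster --threads 16 -i input_3840x2160.yuv -s 3840x2160 -r 60 -o output.266

Higher thread counts pay off most at higher resolutions, where there are enough picture regions to spread across cores.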

Hardware acceleration

By utilizing hardware acceleration, whether through GPUs, ASICs (Application-Specific Integrated Circuits), or FPGAs (Field-Programmable Gate Arrays), developers can offload much of the computational workload from the CPU to specialized hardware. This results in faster VVC encoding and decoding, which is essential for real-time applications like video conferencing, virtual reality, and live broadcasts.

Integrating hardware acceleration allows developers to scale their real-time streaming applications efficiently. For instance, encoding high-resolution content like live sports events or immersive VR environments becomes more feasible without overwhelming CPU resources, leading to better performance, lower power consumption, and improved user experience.

Using parallel processing and hardware acceleration can drastically improve real-time performance, enabling smoother streaming and minimizing latency. By reducing the strain on servers and client devices, these technologies allow developers to deliver seamless, high-quality experiences in applications where timing is critical, whether that's low-latency video conferencing or interactive gaming.

VVC vs. AV1: Key differences

VVC vs. AV1 by compression

For high-resolution video streams like 4K or 8K, VVC excels in compression efficiency, delivering better video quality at lower bitrates. AV1, on the other hand, is optimized for low-bitrate scenarios, making it a strong contender for bandwidth-constrained applications or platforms that prioritize network efficiency over resolution.

Consideration:

If you're building applications where high-quality, high-resolution streaming is critical, such as OTT platforms or live sports broadcasting, VVC's compression efficiency will help minimize bandwidth usage without sacrificing video quality. For lower-bitrate environments (e.g., mobile streaming, social media), AV1 may offer better performance with lower bandwidth requirements.

Computational complexity

VVC’s advanced compression techniques, while offering superior efficiency, require more computational power during encoding and decoding. This can become a consideration in low-latency environments or on devices with limited processing power. In contrast, AV1 is less computationally demanding, making it more suitable for applications where latency or hardware resources are constrained.

Consideration:

In real-time streaming or edge computing scenarios (e.g., video conferencing, cloud gaming), where every millisecond and resource matters, AV1’s lower computational complexity may help you avoid bottlenecks. However, if performance and compression efficiency are top priorities, VVC can deliver better results, especially for high-quality content.

Licensing costs

One key difference is that VVC comes with licensing fees due to patents, which can increase costs depending on your scale and use case. AV1, being open-source and royalty-free, is a cost-effective alternative, particularly for developers working on budget-conscious projects or open platforms.

Consideration:

If you’re deciding between VVC and AV1, consider the trade-off between licensing costs and performance. VVC offers greater efficiency, but at a financial cost. If budget or licensing is a concern, AV1 provides a free alternative without sacrificing too much quality, especially for lower resolutions and bitrates.

Decision guide for VVC and AV1

  • Choose VVC if your project demands high-resolution, high-quality video and you need maximum compression efficiency. It's ideal for large-scale streaming applications where bandwidth costs are significant.
  • Opt for AV1 when licensing fees or computational complexity are constraints, and when you're targeting lower bitrates or need to run on devices with limited processing power.

VVC for adaptive bitrate streaming (ABR)

Enhanced compression for lower bitrates

With VVC's advanced compression algorithms, developers can encode high-quality video streams at significantly lower bitrates. This enables smoother streaming experiences, even when bandwidth fluctuates. In ABR, this means you can offer higher resolution streams without overwhelming the network, reducing buffering and providing consistent quality across different connection speeds.

Multi-bitrate encoding and real-time switching

VVC supports multi-bitrate encoding, allowing multiple stream versions at different bitrates. When combined with real-time stream switching, it optimizes ABR performance in low-bandwidth conditions. As a user’s network conditions change, VVC helps deliver the highest possible video quality by seamlessly switching between bitrates without causing noticeable disruptions in playback.

Key feature for developers

For developers implementing ABR, VVC's compression efficiency can improve profile optimization, helping you reduce buffering while delivering high-quality streams to a variety of devices, from smartphones to 4K TVs. This not only enhances the viewing experience but also helps control server load and bandwidth costs, making your streaming solution more scalable.

How to implement VVC in streaming workflows

Transcoding with VVC

To integrate VVC into your streaming workflow, the first step is transcoding your source video into the VVC codec. Use transcoding tools or libraries that support VVC, such as FFmpeg or custom encoding pipelines optimized for H.266. During transcoding, set parameters like bitrate, resolution, and GOP (Group of Pictures) structure to fit your specific streaming requirements, whether it's for high-resolution 4K/8K content or mobile-optimized streams.
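As a hedged example, assuming an FFmpeg build compiled with libvvenc support (exact option names vary by version, so confirm with ffmpeg -h encoder=libvvenc), a 4K transcode might look like this:

# Transcode to VVC at 4K: scale the source, set a target bitrate, a keyframe interval (GOP length), and a speed/quality preset.
ffmpeg -i source.mov \
  -vf scale=3840:2160 \
  -c:v libvvenc -preset medium -b:v 8000k -g 64 \
  -c:a aac -b:a 128k \
  output_vvc_4k.mp4

The source file name, bitrate, and GOP length here are placeholders; tune them to your content and delivery target.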

Segmenting for HLS/DASH

Once transcoded, the next step is to segment the video. Both HLS and DASH protocols support VVC-encoded content, but segmentation is essential for adaptive bitrate (ABR) streaming. For HLS, segment the video into .ts or .fmp4 chunks; for DASH, use fragmented MP4 (.m4s) segments. Tools like Bento4 or Shaka Packager can handle this process efficiently, ensuring your streams are ready for ABR delivery.
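For example, a single Shaka Packager run can emit both a DASH MPD and an HLS master playlist from a VVC-encoded MP4. The file names below are placeholders, and the descriptor fields should be checked against packager --help for your version:

# Segment a VVC MP4 into fMP4 chunks and generate DASH and HLS manifests in one pass.
packager \
  'in=output_vvc_4k.mp4,stream=video,init_segment=video/init.mp4,segment_template=video/$Number$.m4s,playlist_name=video/playlist.m3u8' \
  --segment_duration 4 \
  --mpd_output manifest.mpd \
  --hls_master_playlist_output master.m3u8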

Creating manifest files

After segmentation, you need to generate the manifest files playlist.m3u8 for HLS and manifest.mpd for DASH. These files describe the different bitrate versions of your video and allow the player to switch between them dynamically based on network conditions. Tools like MediaConvert or custom packaging scripts can generate these manifests, embedding VVC stream information to support seamless playback.
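If you assemble the HLS master playlist by hand instead of letting the packager emit it, it is just a text file. The variant URIs below are placeholders, and the CODECS attribute identifying the VVC profile and level is best copied from your packager's output rather than written manually, so it is omitted here:

# Minimal hand-rolled HLS master playlist referencing two VVC variant playlists (illustrative only).
cat > playlist.m3u8 <<'EOF'
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=3200000,RESOLUTION=1920x1080
1080p/stream.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
720p/stream.m3u8
EOF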

Practical example for ABR formats

  1. Transcode your source video using VVC with tools like FFmpeg:

# Requires an FFmpeg build compiled with libvvenc (VVenC) support.
ffmpeg -i input.mp4 -c:v libvvenc -b:v 3000k output_vvc.mp4

  2. Segment the video using Shaka Packager for DASH:

packager in=output_vvc.mp4,stream=video,output=output_vvc.m4s \
  --mpd_output manifest.mpd

3. Create ABR streams by transcoding the same video at multiple bitrates and segmenting them. Then, update the manifest files to include these different streams.

Tip: VVC’s compression efficiency allows you to pack higher-quality streams at lower bitrates. When setting up ABR profiles, leverage this by offering more bitrate variants without increasing storage or bandwidth costs. This ensures optimal video delivery across a range of devices and network conditions.
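A minimal way to build such a ladder is to loop over a few resolution/bitrate pairs using the same libvvenc-enabled FFmpeg build assumed above; the rungs below are illustrative, not a recommended ladder:

# Encode three ABR renditions of the same source; audio is dropped here and packaged separately.
for variant in "2160:8000k" "1080:3200k" "720:1400k"; do
  height="${variant%%:*}"   # e.g. 2160
  rate="${variant##*:}"     # e.g. 8000k
  ffmpeg -i source.mov \
    -vf "scale=-2:${height}" \
    -c:v libvvenc -preset medium -b:v "${rate}" \
    -an "vvc_${height}p.mp4"
done

Each rendition can then be segmented and referenced from the manifest exactly as in the packaging step above.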

Hardware acceleration for VVC

To maximize the efficiency of VVC encoding and decoding, using hardware acceleration is key. GPUs, ASICs, and FPGAs can take over much of the computational load from the CPU, which translates directly into faster encoding and decoding. Dedicated VVC silicon is still rolling out, however: most GPUs and SoCs shipping today accelerate AVC, HEVC, and increasingly AV1, so verify whether frameworks such as NVIDIA's Video Codec SDK (NVENC) or Apple's VideoToolbox actually expose VVC on your target devices before building a pipeline around them.

If you're deploying VVC in real-time scenarios, such as live streaming or video conferencing, hardware acceleration can drastically reduce latency and improve overall performance. Keep an eye on vendor media engines like Intel's Quick Sync Video and AMD's VCN (Video Core Next), where VVC decode support is starting to appear in the newest generations; until it is broadly available, heavily multi-threaded software encoding remains the practical path in high-performance environments.

Challenges and best practices for developers

While VVC offers remarkable compression efficiency and performance improvements, implementing it in real-world streaming workflows comes with technical challenges that developers must address for optimal results.

  1. Encoding artifacts and compression tuning

One of the most common challenges developers face with VVC is encoding artifacts, such as blockiness, banding, or color shifts. These issues typically arise from misconfigured compression settings, such as overly aggressive bitrate reduction or improper Group of Pictures (GOP) structure. To mitigate these problems:

  • Bitrate allocation: VVC’s advanced features enable greater compression at lower bitrates, but fine-tuning is crucial. Developers should carefully balance bitrate allocation based on the content’s complexity: for static scenes, lower bitrates may suffice, while dynamic, high-motion scenes require higher bitrates to maintain visual quality (a brief sketch follows this list).
  • GOP structure: VVC allows for flexible GOP configurations, including variable GOP lengths. For content with frequent scene changes, use adaptive GOP settings, which dynamically adjust the frame order and compression ratios to optimize encoding quality. In contrast, fixed GOP structures may suffice for simpler or more predictable content, such as news streams or static camera feeds.
  • CTU sizes: VVC uses Coding Tree Units (CTUs), with non-square partitioning options that allow more flexibility compared to HEVC. Fine-tuning CTU sizes can also help reduce artifacts. Developers should experiment with smaller CTUs for complex, detailed scenes and larger CTUs for more uniform content.
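As a sketch of the bitrate-allocation point above (illustrative numbers only; deeper GOP and CTU tuning is exposed through VVenC's expert parameters rather than these generic flags):

# Static, low-motion content (e.g., a news desk): a modest bitrate usually holds up well.
ffmpeg -i news_feed.mp4 -c:v libvvenc -preset medium -b:v 1800k news_vvc.mp4

# High-motion content (e.g., sports): allocate more bits to avoid blockiness and banding.
ffmpeg -i sports_feed.mp4 -c:v libvvenc -preset slow -b:v 6000k sports_vvc.mp4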
  2. Latency in real-time streaming

Latency is a critical consideration in real-time streaming applications like gaming, live sports, or virtual events. While VVC offers exceptional compression efficiency, the encoding complexity can introduce delays if not managed properly. To minimize latency, developers should:

  • Hardware acceleration: Implementing GPU-based or ASIC-based hardware acceleration is essential for real-time applications. Tools such as NVENC (for NVIDIA GPUs) or VideoToolbox (for Apple devices) can significantly reduce the load on the CPU, speeding up encoding and decoding processes. Developers working in cloud environments should explore FPGA-based accelerators, like Intel’s Agilex FPGAs, which are optimized for video encoding workloads.
  • Parallel processing: VVC supports tile-based parallel processing, where the video is divided into independently decodable tiles. Developers should take advantage of this feature by enabling multi-threaded processing across multiple CPU cores or GPUs to reduce encoding time and overall stream latency. For live streaming, use low-latency encoding modes, which prioritize speed over compression efficiency to reduce delay.
  • Low-latency ABR streaming: In adaptive bitrate (ABR) streaming, implementing chunked encoding can help reduce latency. Instead of waiting for a full segment to be encoded, smaller chunks are sent to the player, reducing the overall startup and switching delay (see the packaging sketch after this list).
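For the chunked, low-latency packaging mentioned above, one simple lever is shortening the segment duration; the Shaka Packager flags below are a sketch (dedicated low-latency modes vary by release, so check packager --help):

# Package with 1-second segments to cut startup and bitrate-switch delay.
packager \
  'in=live_vvc.mp4,stream=video,init_segment=ll/init.mp4,segment_template=ll/$Number$.m4s' \
  --segment_duration 1 \
  --mpd_output live_manifest.mpd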
  3. Backward compatibility and fallback strategies

With VVC being relatively new, many devices and platforms may not yet support it natively. Developers need to ensure backward compatibility for users on older systems, which might only support codecs like HEVC or AV1. To manage this, consider the following approaches:

  • Multi-codec workflows: Implement a multi-codec workflow where the same content is transcoded into multiple formats, such as VVC, HEVC, and AV1. Use manifest files that reference all available codecs, allowing players to dynamically select the best codec based on device compatibility and network conditions (a packaging sketch follows this list).
  • Fallback manifests: For adaptive streaming formats like HLS or DASH, utilize fallback manifests that list multiple codec options. This ensures that devices incapable of decoding VVC can still receive an HEVC or AV1 stream, preventing playback failures. Tools like ffmpeg can be used to automatically generate these manifests during the packaging phase.
  • Codec negotiation: Develop a codec negotiation layer within your media player. This layer would inspect the client’s capabilities (via Media Capabilities API) and automatically select the most efficient codec supported by the device.
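As a sketch of the multi-codec workflow above, one packaging run can list a VVC rendition alongside an HEVC rendition of the same title in a single DASH manifest, letting the player pick whichever it can decode (file names and descriptor fields are illustrative):

# Reference both a VVC and an HEVC encode of the same content in one MPD.
packager \
  'in=movie_vvc.mp4,stream=video,init_segment=vvc/init.mp4,segment_template=vvc/$Number$.m4s' \
  'in=movie_hevc.mp4,stream=video,init_segment=hevc/init.mp4,segment_template=hevc/$Number$.m4s' \
  --mpd_output movie.mpd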
  4. Bandwidth considerations and network variability

Despite VVC’s efficiency, network conditions may fluctuate during live streams, leading to potential buffering or degradation in quality. Developers can mitigate this by:

  • Adaptive bitrate profiles: VVC excels in adaptive bitrate (ABR) streaming due to its ability to maintain high video quality at lower bitrates. Developers should carefully craft ABR profiles to account for varying network speeds, allowing smooth stream transitions without significant quality loss. Use real-time bitrate adjustment algorithms to dynamically switch between high and low bitrate profiles based on available bandwidth.
  • Buffer management: Fine-tune buffer settings to handle sudden network fluctuations. Set up aggressive buffer filling at the start of the stream to avoid buffer underruns, and implement graceful degradation strategies that reduce video resolution or frame rate in case of severe bandwidth drops, without interrupting the stream.

Conclusion

Versatile Video Coding (VVC or H.266) represents a transformative leap in video compression technology, offering unparalleled efficiency and performance improvements for high-resolution streaming. With its advanced features like block-based coding, intra block copy, and multi-dimensional motion compensation, VVC enables developers to significantly reduce bandwidth while maintaining video quality, especially for 4K, 8K, and VR content. Despite its technical complexity, including higher computational requirements and early-stage adoption challenges, VVC provides a powerful solution for developers aiming to deliver seamless, high-quality video experiences across various platforms.

When it comes to real-world implementation, FastPix provides the tools and infrastructure to support VVC integration seamlessly. With advanced transcoding pipelines optimized for H.266, FastPix helps developers encode and stream high-resolution content with minimal latency and reduced bandwidth costs. By leveraging hardware acceleration and adaptive bitrate streaming (ABR) workflows, FastPix ensures smooth and scalable video delivery, enabling you to fully harness the benefits of VVC while managing network variability and device compatibility efficiently.

FAQs

What is VVC and how does it differ from previous codecs?

VVC, or Versatile Video Coding (H.266), is a video compression standard that offers up to 50% better compression efficiency compared to HEVC (H.265) at the same visual quality. It incorporates advanced coding tools and techniques that enhance video quality while reducing file sizes, making it ideal for high-resolution content delivery.

What are the main benefits of using VVC for streaming applications?

VVC provides significant benefits for streaming, including lower bitrate requirements for high-quality video, support for resolutions up to 16K, and enhanced features like adaptive streaming and ultra-low latency. These advantages help improve user experience by reducing buffering and maintaining video quality across varying network conditions.

How can developers integrate VVC into existing workflows?

Developers can integrate VVC by utilizing open-source encoders like VVenC and decoders like VVdeC. These tools can be incorporated into existing video processing pipelines, allowing for seamless encoding and decoding of VVC streams. Additionally, integrating with frameworks like FFmpeg can facilitate broader compatibility and functionality.
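As a rough example of the reference tools (flag names can differ between VVenC/VVdeC releases, so confirm with --help):

# Encode raw YUV to a VVC bitstream with vvencapp, then decode it back to YUV with vvdecapp.
vvencapp --preset medium -i input_1920x1080.yuv -s 1920x1080 -r 50 -o stream.266
vvdecapp -b stream.266 -o decoded.yuv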

What are the encoding options available in VVC?

VVC supports various encoding options, including single-pass and two-pass rate control modes, perceptual optimization techniques, and multiple quality/speed presets. Developers can choose settings based on their specific needs, such as prioritizing speed or visual quality.

How does VVC support adaptive bitrate streaming?

VVC enhances adaptive bitrate streaming by allowing multiple bitrate versions of a video to be encoded efficiently. It supports dynamic quality adjustments based on real-time network conditions, enabling seamless switching between different resolutions and bitrates without interrupting playback.

What are the hardware requirements for using VVC?

While VVC offers significant advantages in compression efficiency, it also requires more computational resources than older codecs like AVC or HEVC. Developers should ensure their encoding and decoding hardware is capable of handling the increased complexity associated with VVC.
