The TV Studio Becomes a System
Change is sweeping across the video production studio. High-definition (HD), ultra-high-definition (UHD), video over coaxial cable (or “coax”), video over Ethernet, digital post-production: the gusts of change are relentless. And while the basic functions of the studio have remained unchanged since the days when Walter Cronkite first turned his reassuring yet saturnine gaze toward a camera, the way these functions are implemented and the architectures in which they reside are all in turmoil.
In the next few years the now-digital studio can expect major architectural changes. The new 4K UHD (3840 x 2160 pixels) and 8K UHD (7680 x 4320) formats will appear—initially for production, but later for distribution as well. Transport within the studio will shift from bundles of Serial Digital Interface (SDI) coax cables to hybrid networks and, eventually, to unified Ethernet. And specialized video equipment vendors will increasingly find themselves becoming software application vendors, as servers encroach on a world that had been populated by special-purpose hardware.
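A rough bandwidth estimate shows why the UHD formats push transport away from single coax links. The sketch below is a back-of-the-envelope calculation, not from the article itself; the frame rate (60 fps), bit depth (10-bit), and 4:2:2 chroma subsampling (an average of two samples per pixel) are assumed parameters chosen as typical production values.

```python
def uncompressed_gbps(width, height, fps=60, bit_depth=10, samples_per_pixel=2):
    """Approximate active-video payload rate in gigabits per second.

    Assumes 4:2:2 chroma subsampling (two 10-bit samples per pixel on
    average) and counts active pixels only, ignoring blanking intervals.
    """
    return width * height * fps * bit_depth * samples_per_pixel / 1e9

# Approximate uncompressed rates for common production formats:
for label, w, h in [("1080p60", 1920, 1080),
                    ("4K UHD p60", 3840, 2160),
                    ("8K UHD p60", 7680, 4320)]:
    print(f"{label}: {uncompressed_gbps(w, h):.1f} Gb/s")
```

By this estimate, 1080p60 fits on a single 3 Gb/s coax link, 4K60 already needs roughly 10 Gb/s, and 8K60 approaches 40 Gb/s—territory where bundled coax becomes unwieldy and switched Ethernet starts to look attractive.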
The Basic Functions
The tasks that go on in a video studio haven’t really changed that much since the days of the first commercial broadcast. The studio creates content. It may also import content from outside. It edits content, both live and off-line, in post-production. It stores content. And it delivers content to clients. That about sums it up.
In the beginning, before video tape, the process had an elegant simplicity. Programs happened live, on a sound stage. Analog video from the cameras went, via coax, to a mixing panel, where technicians switched between camera signals to compose a single video output on the fly. That output went past a monitoring panel—often simply a technician watching waveforms on a synchroscope—directly to the transmitter, either via a cable to the roof or via a microwave link to a nearby hilltop.
Ampex’s 1956 introduction of massive video-tape recorders (Figure 1) changed everything. Now the live program could be recorded, and the great spools of tape carried down to a vault in the basement for storage, to be brought out again for delayed broadcast. Just as important, a post-production team could replay the tape through an editing console, cutting, inserting, and dubbing to polish the program. TV production was suddenly not live theater: it was more like movie production, with the ability to do multiple takes and edit together a final program off-line. Local studios could receive pre-recorded programs via couriered tape or via microwave links, edit in their own commercials, and broadcast the finished product. Now all the basic functions—receiving, capturing, editing, storing, and transmitting—existed. From here on out, only the performance and implementation would change.