
National AI Research Institute
Making a Better Tomorrow

Organization

Introduction of Research

Media Research Division

  • Media Transmission Research Group Introduction
    The group develops transmission technologies for delivering broadcast media efficiently, including cable/wireless broadcasting transmission, network convergence broadcasting transmission, and social welfare and disaster alert broadcasting transmission.

    In particular, the group researches a range of application technologies: wireless transmission for ultra-high-quality UHD beyond 4K UHD and for convergence broadcast services; cable transmission that carries uplink and downlink signals simultaneously for multi-gigabit data services; network convergence transmission for a variety of additional data/application services; social welfare broadcasting for disadvantaged people such as persons with disabilities; and disaster alert broadcasting for emergencies.

    ㅇ Wireless broadcasting transmission technology

    The group researches technologies that improve the efficiency of broadcasting spectrum use, such as single frequency network (SFN) analysis and wireless transmission link technology for broadcast media services.
    In addition, the group develops transmission technology that efficiently delivers most broadcasting services simultaneously, including UHD for next-generation terrestrial broadcasting, mobile HD, and radio, and carries out the related international standardization.


    - Major Outcomes (~2016)
    ● Wireless broadcasting transmission technology and international standard (ATSC 3.0)
    ● Physical layer standard: LDM, LDPC, L1-FEC
    ● Single frequency network standard: TxID
    ● Terrestrial broadcasting transmission/reception system
    ● ATSC 3.0 transmission/reception module
    ● Simultaneous UHD/HD broadcasting LDM transmitter/receiver
    ● LTDM transmission/reception system combining LDM and TDM
    ● NAB 2015 Technology Innovation Award


    - Research in Progress
    ● LDM based multiple antenna transmission technology for ultra-high-quality UHD and network convergence broadcasting services, and its standardization (see the sketch below)
    ● Transmitter Identification (TxID) analysis technology for terrestrial UHD single frequency networks
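
    The LDM items above superimpose two data layers in one RF channel: a robust core layer and a higher-capacity enhanced layer attenuated by an "injection level". The Python/NumPy sketch below illustrates only the transmit-side combining, with unit-power placeholder symbols and an assumed 5 dB injection level; it is not the group's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder unit-power baseband symbols for the two layers:
    # a robust core layer (QPSK-like) and a high-capacity enhanced layer.
    n = 1024
    core = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
    enhanced = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

    injection_db = 5.0              # assumed injection level (enhanced layer below core)
    g = 10 ** (-injection_db / 20)  # linear attenuation applied to the enhanced layer

    # Superimpose the layers and renormalize to unit transmit power.
    ldm = (core + g * enhanced) / np.sqrt(1 + g ** 2)

    A receiver decodes the core layer first (treating the enhanced layer as noise), cancels it, and then decodes the enhanced layer; this is how one channel can carry, for example, a robust mobile service and a fixed UHD service at the same time.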

    ㅇ Cable broadcasting transmission technology
    The group researches core technologies for delivering broadcast media services over wired networks such as cable networks: a wideband-channel-based 10 Gbps broadcasting infrastructure and full-duplex transmission/reception technology for high-quality, super-realistic media services; multi-platform convergence transmission system technology; and RoIP (Radio over Internet Protocol) technology, which evolves the RF-based broadcasting network into an IP-based convergence transmission network.
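
    To put the 10 Gbps target in perspective, a rough estimate is sketched below; every number is an assumption for the sake of the arithmetic, not an actual system parameter of the group.

    # Illustrative throughput estimate for a wideband OFDM cable downstream.
    bandwidth_hz = 1.0e9       # assumed usable downstream bandwidth
    bits_per_symbol = 12       # 4096-QAM
    code_rate = 0.9            # assumed LDPC code rate
    overhead = 0.15            # assumed pilot/cyclic-prefix/signaling overhead

    throughput_bps = bandwidth_hz * bits_per_symbol * code_rate * (1 - overhead)
    print("approx. %.1f Gbps" % (throughput_bps / 1e9))   # about 9.2 Gbps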

    - Major Outcomes (~2016)
    ● Next-generation cable transmission system
    ● Transmission experiment with 2 UHD programs in 1 channel (6 MHz) on a commercial cable network
    ● Introduction of next-generation cable transmission technologies (OFDM, LDPC, etc.)
    ● UHD transmission system for massive content
    ● Transmission of massive broadcast content by combining broadcasting channels
    ● The world's first UHD trial service via a commercial broadcasting network
    ● UHDTV cable broadcasting transmission/reception standard (TTA)

    - Research in Progress
    ● RoIP transmission technology for delivering smart media over optical IP networks and its international standardization
    ● Wideband-channel-based 10 Gbps cable transmission technology, full-duplex transmission/reception technology, and their international standardization

    ㅇ Network convergence broadcasting transmission technology
    The group researches broadcasting transmission/transport technology that provides high-quality services and additional data services over converged heterogeneous networks, delivering a variety of additional data/application services such as the ESG (Electronic Service Guide), applications, and a home portal, as well as high-quality audio/video services, through terrestrial UHD broadcast and broadband networks.
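
    As a conceptual illustration of hybrid delivery, the sketch below models a single service entry whose audio/video travels over the broadcast channel while the ESG and application data are fetched over broadband. The field names and URLs are hypothetical and do not follow any actual ATSC 3.0 signaling schema.

    # Hypothetical, simplified hybrid-service entry (not an ATSC 3.0 data structure).
    service = {
        "service_id": 1001,
        "name": "UHD-1",
        "broadcast": {"video": "hevc_uhd_stream", "audio": "mpegh_3da_stream"},
        "broadband": {
            "esg_url": "https://example.com/esg/uhd-1",
            "app_url": "https://example.com/apps/uhd-1/portal",
        },
    }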

    - Major Outcomes (~2016)
    ● Domestic terrestrial UHDTV broadcasting transmission/reception standard(TTA)
    ● Area: terrestrial UHDTV hybrid broadcasting service
    ● Service: dynamic linkage service
    ● Terrestrial broadcasting headend system
    ● ATSC 3.0 multiplexer/gateway
    ● Terrestrial UHDTV standard based scheduling and signaling server
    ● Terrestrial UHDTV broadcasting service receiving verification platform

    - Research in Progress
    ● Terrestrial UHD broadcast based convergence platform and services

    ㅇ Social welfare and disaster alert transmission technology
    In the smart media convergence environment, the group researches more convenient and effective intelligent social welfare and disaster alert broadcasting service technologies that use terrestrial UHD broadcast and broadband networks: social welfare broadcasting for disadvantaged people such as persons with disabilities, and system technology for delivering alerts and response information in the event of a disaster.
    In addition, the group researches media/receiver linkage technologies that extend the service coverage from TV receivers to smart receivers and from indoor to outdoor environments.
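
    As a conceptual sketch of the alert-and-reaction flow described above, the snippet below parses a simplified alert message, wakes a standby receiver, and switches it to a disaster coverage channel. The message fields and the receiver interface are hypothetical; real emergency alert formats and set-top interfaces differ.

    class Receiver:
        """Minimal stand-in for a set-top box control interface (hypothetical)."""
        def __init__(self):
            self.standby, self.channel = True, None
        def is_standby(self):
            return self.standby
        def wake_up(self):
            self.standby = False        # low-power wake-up path
        def tune(self, channel):
            self.channel = channel      # automatic channel switching

    alert = {                           # hypothetical, simplified alert payload
        "event": "earthquake",
        "severity": "severe",
        "area": "Gangwon-do",
        "alert_channel": 5,             # channel carrying detailed disaster coverage
    }

    def handle_alert(alert, receiver):
        """Wake a standby receiver and retune it when a severe alert arrives."""
        if alert["severity"] in ("severe", "extreme"):
            if receiver.is_standby():
                receiver.wake_up()
            receiver.tune(alert["alert_channel"])

    handle_alert(alert, Receiver())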

    - Major Outcomes (~2016)
    ● Emergency alert broadcasting system
    ● Low power wake-up transmitter/receiver
    ● Terrestrial/cable disaster message reception system
    ● Set-top-box control module for automatic channel switching

    - Research in Progress
    ● Master antenna TV signal processing technology for enhancing UHD reception environment

    Abbreviations
    ATSC: Advanced Television Systems Committee
    FEC: Forward Error Correction
    LDM: Layered Division Multiplexing
    LDPC: Low-Density Parity Check
    LTDM: Layered Time Division Multiplexing
    NAB: National Association of Broadcasters
    OFDM: Orthogonal Frequency Division Multiplexing
    SFN: Single Frequency Network
    TDM: Time Division Multiplexing
    TTA: Telecommunications Technology Association
    TxID: Transmitter Identification
    UHD: Ultra High Definition


    Image: Terrestrial UHDTV Based Broadcasting Service System

    Image: Cable Delivery System for High-Quality, Super-Realistic Media Services

    Image: UHD and Convergence Broadcast Service

    Image: Terrestrial UHDTV Based Social Welfare and Disaster Alert Transmission Service
  • Realistic AV Research Group Introduction
    The Realistic AV Research Group aims to provide realistic and cost-effective video/audio services by developing various advanced technologies. The group's main research areas include video/audio coding, perceptual video quality assessment and enhancement, and multi-dimensional audio production/reproduction. The group also studies various application technologies such as audio-signal-based data transmission.

    ㅇ Video/Audio Coding
    To maximize the storage and transmission efficiency of various realistic video and audio signals, our group is developing cutting-edge video/audio compression technologies and aims to standardize them internationally through active participation in standardization meetings. The realistic video and audio signals of interest include UHD, HDR, WCG, HFR, and 360-degree VR video as well as omnidirectional audio.
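
    For readers unfamiliar with HEVC in practice, the sketch below runs an offline UHD encode with FFmpeg's libx265 from Python. The file names and CRF value are placeholders, and the group's real-time 60 fps broadcast codecs are dedicated systems rather than this kind of offline command.

    import subprocess

    # Offline HEVC (H.265) encode of a UHD clip with FFmpeg/libx265.
    # "input_uhd.mp4", the preset, and the CRF value are illustrative placeholders.
    cmd = [
        "ffmpeg", "-i", "input_uhd.mp4",
        "-c:v", "libx265", "-preset", "medium", "-crf", "28",
        "-c:a", "copy",                 # keep the original audio track
        "output_hevc.mp4",
    ]
    subprocess.run(cmd, check=True)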


    - Major Achievements (~2016)
    ● International standardization of video/audio coding technologies (MPEG)
    ● Video coding standard: HEVC, SHVC, RExt
    ● Audio coding standard: USAC, 3DA
    ● Development of video/audio Codec system
    ● HDR/WCG UHD supported real-time(60fps) HEVC Codec
    ● UHD supported real-time(60fps) SHVC encoder/decoder module
    ● Cloud based UHD Video Editor Solution
    ● USAC Codec for DAB based Digital Radio
    ● MPEG-H 3DA encoder module supporting 10.2 channel + 4 object

    - Ongoing Research
    ● Development and international standardization of a next-generation video coding technology that doubles the compression ratio of HEVC
    ● Development and international standardization of a next-generation audio coding technology that doubles the compression ratio of 3DA

    ㅇ Perceptual video quality assessment & enhancement
    Our group studies automatic perceptual quality assessment and video restoration/enhancement, aiming to establish a framework that maximizes the perceptual quality of video signals in various application scenarios.
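
    A common way to validate such an automatic metric is to correlate its scores with subjective mean opinion scores (MOS) collected from viewers. The sketch below computes the usual Pearson and Spearman correlations with SciPy on made-up placeholder numbers.

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    # Placeholder data: objective metric scores and subjective MOS for the same clips.
    metric_scores = np.array([0.92, 0.81, 0.64, 0.55, 0.73, 0.88])
    mos = np.array([4.6, 4.1, 3.0, 2.6, 3.5, 4.4])

    plcc, _ = pearsonr(metric_scores, mos)    # linear correlation
    srocc, _ = spearmanr(metric_scores, mos)  # rank-order correlation
    print("PLCC %.3f, SROCC %.3f" % (plcc, srocc))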


    - Major Achievements (~2016)
    ● Multi-GPU based high-quality HD-to-UHD video converter
    ● High speed: conversion at 60 fps on 4 GPU cards
    ● High quality: perceived quality degradation of only 15% compared with the original UHD video


    - Ongoing Research
    ● HDR video pre/post-processing technology
    ● Development of a perceptually optimal HDR Opto-Electrical Transfer Function (OETF) (see the PQ sketch after this list)
    ● Studies on color space conversion methods with minimal luma-chroma crosstalk
    ● Automatic perceptual video quality assessment technology
    ● 95% accuracy with respect to actual subjective evaluation data
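
    As a reference point for the OETF item above, the sketch below implements the widely used PQ curve from SMPTE ST 2084; the group's perceptually optimal OETF research is not necessarily this curve.

    import numpy as np

    # SMPTE ST 2084 (PQ) constants.
    M1, M2 = 2610 / 16384, 2523 / 4096 * 128
    C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

    def pq_oetf(luminance_cd_m2):
        """Map absolute luminance in [0, 10000] cd/m^2 to a PQ code value in [0, 1]."""
        y = np.clip(np.asarray(luminance_cd_m2, dtype=float) / 10000.0, 0.0, 1.0)
        y_m1 = np.power(y, M1)
        return np.power((C1 + C2 * y_m1) / (1.0 + C3 * y_m1), M2)

    print(pq_oetf([0.1, 100, 1000, 10000]))   # code values rise steeply at low luminance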


    ㅇ Multi-dimensional audio acquisition/production and reproduction
    Our group conducts research on multi-dimensional audio acquisition/production and reproduction technologies for UHDTV broadcasting, digital cinema, and VR/AR applications.

    - Major Achievements (~2016)
    ● Development of hybrid channel/object based audio content acquisition/production and format technology
    ● 10.2-channel microphone
    ● Authoring tool for multi-channel/object based audio
    ● Standardization of a hybrid audio file format
    ● Channel + object based audio rendering and listening-environment-adaptive reproduction technology for UHDTV broadcasting and digital cinema applications
    ● Cinema audio processor technology
    ● Speaker array based sound bar technology
    ● Binaural rendering technology for headphones (see the sketch at the end of this subsection)
    ● Listening-environment-adaptive reproduction technology

    - Ongoing Research
    ● Sound field analysis / high-dimensional audio conversion technology
    ● High-dimensional audio conversion and synthesis through sound field analysis and source separation
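
    As a minimal illustration of the binaural rendering item listed above, the sketch below convolves a mono source with an assumed left/right head-related impulse response (HRIR) pair; real renderers interpolate measured HRTF sets and handle many moving channel/object signals.

    import numpy as np
    from scipy.signal import fftconvolve

    def binaural_render(mono, hrir_left, hrir_right):
        """Convolve a mono source with one HRIR pair into a 2-channel headphone signal."""
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        out = np.zeros((max(len(left), len(right)), 2))
        out[:len(left), 0] = left
        out[:len(right), 1] = right
        return out

    # Placeholder signals: 1 s of noise and a crude 64-tap HRIR pair with a
    # small interaural delay (a real HRIR set would be measured, not invented).
    fs = 48000
    rng = np.random.default_rng(0)
    mono = rng.standard_normal(fs)
    hrir_l, hrir_r = np.zeros(64), np.zeros(64)
    hrir_l[0], hrir_r[5] = 1.0, 0.7

    stereo = binaural_render(mono, hrir_l, hrir_r)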


    ㅇ Other application technologies
    ● Development of acoustic-channel-based supplementary information transmission and retrieval technology for broadcast content identification and advertisement monitoring applications

    ● Acoustic-signal-based place/situation recognition technology for supplementing the auditory abilities of elderly and disabled people

    Abbreviations
    UHD: Ultra High Definition
    HDR: High Dynamic Range
    WCG: Wide Color Gamut
    HFR: High Frame Rate
    UHQ: Ultra High Quality
    HEVC: High Efficiency Video Coding
    SHVC: Scalable Extension of HEVC
    RExt: Range Extensions of HEVC
    USAC: Unified Speech and Audio Coding
    3DA: 3D Audio

    Image: UHD Real-Time Encoding System

    Image: High-Quality Conversion Technology Concept

    Image: Automatic Perceptual Video Quality Assessment

    Image: Multi-Dimensional Audio Acquisition/Production and Reproduction

    Image: Acoustic Channel Based Supplementary Information Transmission and Retrieval Technology Concept

    Image: Acoustic Signal Based Place/Situation Recognition Technology Concept
  • Tera-media Research Group
    Wide Field of View Video
    - Wide field of view video technology maximizes the sense of reality by expanding the limited viewing angle of conventional display devices, which cover only about 60 degrees.
    UWV (Ultra Wide Vision) is a panoramic display technology that supports a viewing angle of over 120 degrees. To apply UWV to broadcasting services and deliver it in real time, we are developing technologies including an active multi-camera rig, real-time stitching, and HEVC-based video encoding and transmission. We plan to apply these technologies to a live broadcasting pilot service at the 2018 PyeongChang Winter Olympics, and we are also extending them to provide 360-degree VR services based on real-world video.
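
    A minimal offline counterpart of the stitching step is sketched below with OpenCV's high-level stitcher on two hypothetical camera views; the live UWV pipeline is a separate, real-time system.

    import cv2

    # Two overlapping views from adjacent cameras (hypothetical file names).
    images = [cv2.imread("cam_left.jpg"), cv2.imread("cam_right.jpg")]

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("uwv_panorama.jpg", panorama)   # wide field-of-view result
    else:
        print("stitching failed, status", status)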

    Complete 3D Video
    - To provide complete 3D images and video on diverse platforms, we are developing core technologies for Light Field (LF), super multi-view, and omnidirectional 360-degree images and video. These technologies cover the acquisition of 3D content with LF or omnidirectional cameras, extraction of 3D geometrical information, dynamic LF image synthesis/reproduction in various terminal environments, and 3D stereoscopic image formatting/compression for transmission.
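
    One small piece of the geometry-extraction step can be sketched with classic block-matching stereo in OpenCV; the file names and parameters below are placeholders, and LF/omnidirectional pipelines use much richer depth estimation.

    import cv2

    # Rectified left/right views (hypothetical file names).
    left = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching disparity; numDisparities must be a multiple of 16.
    stereo = cv2.StereoBM_create(numDisparities=128, blockSize=15)
    disparity = stereo.compute(left, right)    # fixed-point disparity map (scaled by 16)

    norm = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("disparity.png", norm.astype("uint8"))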

    User-Friendly Emotional UI/UX
    - Our primary aim is to achieve realistic, emotional, and user-friendly broadcasting services by providing high-quality audio-visual information as well as additional olfactory or tactile stimuli. We are also developing assistive technologies that improve media accessibility for the audio-visually impaired.

     Image: Wide Field of View Video

     Image: Complete 3D Video

     Image: User-Friendly Emotional UI/UX
  • Smart Media Research Group
    Smart Broadcasting Platform Technology
    With the development of smart media, operators increasingly need to create new businesses, such as advertising and content recommendation, that use broadcast content metadata. At the same time, users increasingly demand scene-level reconstruction of broadcast content and story-based consumption. To address these requirements, we are developing new smart broadcasting platform technologies.

    Major R&D Fields
    We are developing techniques that generate and combine scene-based metadata automatically, by machine rather than by human operators (a minimal shot-boundary sketch follows the list below).
    - Technology for automatic scene creation
    - Technology for automatic generation of convergence-type metadata
    - Smart media base server technology
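
    The sketch below is a toy version of machine-generated scene boundaries: it flags video frames whose HSV histogram differs strongly from the previous frame using OpenCV. The file name and threshold are placeholders, and convergence-type metadata generation goes far beyond this.

    import cv2

    def shot_boundaries(video_path, threshold=0.5):
        """Return frame indices where the HSV histogram changes abruptly (likely cuts)."""
        cap = cv2.VideoCapture(video_path)
        prev_hist, cuts, idx = None, [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
            hist = cv2.normalize(hist, hist).flatten()
            if prev_hist is not None:
                d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
                if d > threshold:
                    cuts.append(idx)
            prev_hist, idx = hist, idx + 1
        cap.release()
        return cuts

    print(shot_boundaries("broadcast_clip.mp4"))   # hypothetical input file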


    Interactive media creation platform technology
    Users want a way to create their own video content, but the ordinary filmmaking process demands a well-organized scenario, expensive equipment, and cinematographers. A large volume of video clips covering almost anything a user can imagine has already been produced and exists on the Internet, so this existing video content can allow users to generate their own content easily. We are studying interactive media creation platform techniques that analyze and reorganize existing video content according to a scenario given by the user.
    [Fields of Research and Development]
    To support generating new content from existing content, we are developing video content segmentation and metadata annotation techniques that pick out the exact video clips for specific semantics. Based on the segmented and annotated content, personalized video creation can be realized with user scenario analysis, video retrieval, and video reorganization techniques (a minimal retrieval sketch follows the list below).
    - Unstructured user scenario analysis
    - Video segmentation and metadata annotation
    - Video retrieval and reorganization based on user scenario
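
    The sketch below is a toy version of scenario-driven clip retrieval: TF-IDF over hypothetical per-clip text metadata, ranked by cosine similarity against one scenario sentence with scikit-learn. Real scenario analysis and retrieval are considerably richer.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical per-clip text metadata (captions, tags, transcript snippets).
    clip_texts = [
        "goalkeeper saves a penalty kick in the rain",
        "news anchor introduces the weather forecast",
        "drone shot of a mountain landscape at sunrise",
    ]
    query = "a scenic sunrise over mountains"

    vectorizer = TfidfVectorizer()
    clip_vectors = vectorizer.fit_transform(clip_texts)
    query_vector = vectorizer.transform([query])

    scores = cosine_similarity(query_vector, clip_vectors).ravel()
    best = int(scores.argmax())    # index of the clip that best matches the scenario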


    Object-oriented media creation and service technology
    We are developing a multi-camera video service that lets viewers select the view they want, rather than the one-way broadcast centered on existing content creators. We are also developing a multi-camera video service that detects, identifies, and tracks objects in video for public services required for crime and disaster prevention, as well as automatic event-based video creation technology that uses broadcast context information, such as performer and camera panning information, to reduce the working time of event-based video creation, which currently depends on manual operation.

    [Fields of Research and Development]
    As the age of artificial intelligence arrives, we are studying object recognition in video, multi-angle person recognition and tracking in video, event information creation in video, and artificial intelligence based video processing specialized for multi-camera video (a minimal face-detection sketch follows the list below).
    - Multi-camera video processing technology and deep learning based multi-angle face detection and identification technology
    - Multi-camera integrated control technology based on multi-object tracking
    - Deep learning based event information extraction and event-based segment video creation technology
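
    As a minimal stand-in for the detection step, the sketch below runs OpenCV's bundled Haar cascade on one hypothetical camera frame; the group's work relies on deep-learning-based multi-angle detection and identification rather than this classic detector.

    import cv2

    # OpenCV ships this cascade file; the exact path comes from the installed package.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    frame = cv2.imread("camera_frame.jpg")          # hypothetical multi-camera frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", frame)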


    Context-aware Space-integrated Digilog Signage Platform Technology
    Smart signage technology delivers and presents multimedia content on remote indoor/outdoor screens.
    In addition to digital displays, we are developing technologies that build screen environments using ordinary objects as screens and offer advertisements, media art, and public information to users in an interactive way.

    Research and Development Contents
    We are developing space-integrated Digilog signage technology that composes a screen by recognizing various types of objects and environments and corrects content according to the perceived object-screen information (a minimal homography sketch follows the list below). The main technologies are:
    - Object screen modeling and configuration
    - Content correction and generation for object screens
    - Content distribution and reproduction based on object screen composition
    - Object-screen-based interaction services
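
    A minimal sketch of content correction for an object screen: flat content is warped onto a quadrilateral surface whose corner positions are assumed to have been measured, using a homography in OpenCV.

    import cv2
    import numpy as np

    content = cv2.imread("advert.png")              # hypothetical content image
    h, w = content.shape[:2]

    # Corners of the flat content and the (assumed, measured) corners of the
    # physical object surface as seen by the projector/camera.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[120, 80], [940, 60], [980, 650], [100, 700]])

    H, _ = cv2.findHomography(src, dst)
    warped = cv2.warpPerspective(content, H, (1280, 720))   # content fitted to the object screen
    cv2.imwrite("object_screen_content.png", warped)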


    Immersive I-centric space media technology using see-through HMD
    Along with the development of wearable/portable display terminals and various sensor technologies, VR/AR is becoming a major area of next-generation IT, and demand is growing for intelligent services that feel more realistic to users. We are studying intelligent mixed reality space technology that presents realistic virtual objects grounded in real space through an HMD and lets users control or manipulate actual objects in real space through interaction with those virtual objects.

    [Fields of Research and Development]
    A virtual object created from a real object is visualized at a specific location, and users can interact with the real object through the virtual one. In particular, we are developing realistic telepresence technology that visualizes a person in a remote space in real time based on mixed reality.
    - Real-time 3D (human) object reconstruction using multiple sensors
    - Object visualization using a see-through HMD based on mixed reality
    - Virtual/real object interworking in the smart space
    - Palm UI technology for HMD users


    Deep-learning based Disaster Information Platform Technology using UAV Images and IoT Sensor Information
    We research technologies based on real-time images and sensing data from unmanned aerial vehicles (UAVs) that perform surveillance and prediction for disasters such as forest fires, local floods, and landslides; support immediate and precise situation recognition and response in the disaster area; and deliver disaster information. This research aims to secure essential technologies for precaution-centric, proactive disaster management and for building disaster-safety communication networks.

    Research Contents
    Based on real-time processing and analysis of the data acquired from multi-component sensors mounted on the UAV, we have researched technologies that support recognition, prediction, and situational response for disasters such as forest fires, local floods, and landslides, together with broadcasting service technology for public disaster information linked with an integrated alert system (a minimal classification sketch follows the list below).
    - UAV-mounted multi-component sensor management technology
    - Real-time image processing technology for disaster big-data
    - Disaster prediction and situational response technology
    - Standardization related to UAV-based disaster data collection and processing
    - Disaster information transfer technology
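
    As a minimal sketch of frame-level disaster recognition, the snippet below runs a ResNet-18 with a two-class head over one UAV frame and outputs a fire probability. The checkpoint path, class layout, and file name are hypothetical, and the platform's actual models and multi-sensor fusion are not shown.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Hypothetical: a ResNet-18 fine-tuned with two classes (no fire / fire).
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load("wildfire_resnet18.pt", map_location="cpu"))
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    frame = Image.open("uav_frame.jpg").convert("RGB")   # hypothetical UAV frame
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
        fire_prob = torch.softmax(logits, dim=1)[0, 1].item()
    print("fire probability: %.2f" % fire_prob)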


     Image: Smart Broadcasting Platform Technology

     Image: Interactive Media Creation Platform Technology

     Image: Object-Oriented Media Creation and Service Technology

     Image: Context-aware Space-integrated Digilog Signage Platform Technology

     Image: Immersive I-centric Space Media Technology Using See-Through HMD

     Image: Deep-learning based Disaster Information Platform Technology Using UAV Images and IoT Sensor Information