WO2006110975A1 - Multimedia system for mobile client platforms - Google Patents

Multimedia system for mobile client platforms

Info

Publication number
WO2006110975A1
Authority
WO
WIPO (PCT)
Prior art keywords
multimedia
audio
objects
multimedia object
video
Prior art date
Application number
PCT/CA2005/000617
Other languages
French (fr)
Other versions
WO2006110975A8 (en)
Inventor
Mark Krebs
Original Assignee
Logovision Wireless Inc.
Priority date
Filing date
Publication date
Application filed by Logovision Wireless Inc. filed Critical Logovision Wireless Inc.
Priority to PCT/CA2005/000617 priority Critical patent/WO2006110975A1/en
Publication of WO2006110975A1 publication Critical patent/WO2006110975A1/en
Publication of WO2006110975A8 publication Critical patent/WO2006110975A8/en


Classifications

    • H04N7/17336 Handling of requests in head-ends
    • H04L65/762 Media network packet handling at the source
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/122 Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N19/126 Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/154 Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/156 Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/172 Coding unit being an image region, the region being a picture, frame or field
    • H04N19/176 Coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/18 Coding unit being a set of transform coefficients
    • H04N19/186 Coding unit being a colour or a chrominance component
    • H04N19/40 Video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N19/42 Implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/59 Predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N21/2187 Live feed
    • H04N21/2225 Local VOD servers
    • H04N21/23106 Content storage operation involving caching operations
    • H04N21/234309 Reformatting of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H04N21/234318 Reformatting of video signals by decomposing into objects, e.g. MPEG-4 objects
    • H04N21/2541 Rights Management
    • H04N21/2662 Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • H04N21/4325 Content retrieval operation from a local storage medium, e.g. hard-disk, by playing back content from the storage medium
    • H04N21/4331 Caching operations, e.g. of an advertisement for later insertion during playback
    • H04N21/64322 IP (communication protocols)
    • H04N21/8355 Generation of protective data, e.g. certificates, involving usage data, e.g. number of copies or viewings allowed
    • H04N21/85406 Content authoring involving a specific file format, e.g. MP4 format
    • H04L12/189 Arrangements for broadcast or conference, e.g. multicast, in combination with wireless systems
    • H04L65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for unicast
    • H04L65/70 Media network packetisation
    • H04L65/764 Media network packet handling at the destination
    • H04L65/80 Responding to QoS

Definitions

  • the invention relates to issues of the wireless Internet, specifically to methods of multimedia transmission and playback for mobile clients.
  • the current methods of distributing multimedia data over the wireless Internet to mobile clients are constrained by existing wireless bandwidth, and the real-time decoding, processing and displaying of multimedia content with limited hardware capabilities. These hardware limitations include slow CPUs, high memory latencies, slow drawing capabilities and the absence of YUV to RGB conversion in the hardware.
  • Using embedded media players, several cell phone handsets and handheld computers can play either streamed video or audio.
  • Popular digital video encoding standards for some handsets are H263 and MPEG4.
  • the audio codecs, MP3, AMR and AAC, are also typically supported on some mobile handhelds.
  • Newer video codecs, like H264, could be used for video transmission to cell phones, but would require client systems with fast memory access for their motion compensation methods.
  • Embedded streaming media players rely on firmware integration to take advantage of the multitasking capabilities of cell phone handsets. At the time of this writing, most cell phones cannot support multimedia playback because they are only capable of supporting one or a few concurrent processing threads.
  • Video is also limited to very low frame rates, and the bandwidth available for streaming in North America is low, varying from 2-3 kbytes/second to ISDN speeds of 64 kbits/second.
  • European countries and Japan currently offer 3G network connection speeds, varying from 64-300 kbits/second, and offer more technologically advanced cell phones with embedded media players that can achieve higher video frame rates. For limited usage periods, some EV-DO (Evolution Data Optimized) networks can also provide these higher speeds over local CDMA networks.
  • MPEG4 software players depend on PC multitasking or hardware APIs for efficient processing to draw video frames.
  • J2ME (device-independent Java) defines two configurations for limited devices: the Connected Limited Device Configuration (CLDC) and the Connected Device Configuration (CDC).
  • The Mobile Information Device Profile (MIDP) runs on top of the CLDC, and several profiles run on top of the CDC.
  • Java players for cell phones like the Oplayo MVQ player exist, but implementations of true, platform independent, MPEG4 Java decoders that will play video on cell phones are not known.
  • More efficient methods, such as U.S. Pat. No. 5,699,121, do not rely on DCT motion compensation, and propose pattern matching to identify regions in the motion residual signal that have not been accurately reproduced and to correct them using a pattern library. But again, this approach does not use MPEG4 video encoding.
  • Bit streaming is the standard method of transmitting audio or video to cell phones over wireless networks. Streamed bits are buffered and then decoded, or entire video files are downloaded or proportionately cached, or, as in progressive HTTP streaming, downloaded to a point where complete, continuous playback is deemed possible.
  • Bit streaming of audio/video content is usually done over a non-reliable transport like UDP and requires a lot of error correction and duplication of content (extra stream correction data).
  • Alternatives to streaming methods have been proposed for the transmission of video as objects through pre-fetched lists (Waese et al., U.S. Pat. No. 6,286,031), which are similar to downloading pre-fetched lists of SMIL objects, and instant or scheduled notification file downloading (Stumm, U.S. Pat. No. 5,768,528).
  • However, these do not address specific continuity and deployment issues for wireless multimedia transmission and concurrent playback on limited-tasking cell phone handsets.
  • Streaming also requires client processing for significant error correction in video decoding and adaptive encoding for varying channel bitrates; in cellular networks, it also requires the use of the MMS multimedia protocol.
  • the object of the current invention is to solve one or more of the drawbacks in existing methods discussed above, or to provide other improvements to the art.
  • the invention relates to wireless Internet multimedia transmission and wireless clients.
  • the invention provides a method of efficient multimedia object creation.
  • The deployment of the invention addresses the limitations of large-scale multimedia transmission on cellular networks to wireless clients.
  • the invention relates to methods of decoding of video, sufficiently optimized to be played on a limited wireless client.
  • the invention relates to methods of decoding of audio, sufficiently optimized to be played on a limited wireless client.
  • The limited mobile handset multimedia object player, for both MPEG4 video decoding and AAC audio decoding, is implemented as a device-independent Java (J2ME) applet.
  • the invention pertains to efficiently transmittable multimedia object creation.
  • a server-based transcoder coupled with a multimedia object creator, inputs a standard analog signal or alternative digital signal like MPEG2, and converts this signal into true MPEG4/AAC multimedia objects.
  • The multimedia objects can then be dynamically uploaded to multiple live content hosting web servers, which, through proximate mobile network proxy servers, make live content accessible to mobile clients as consecutive multimedia objects.
  • the multimedia object creator produces discrete multimedia objects from video and audio segments of a continuous stream. If the stream is MPEG4, multimedia objects can also be segments of multiple component video and audio streams. In the case of multiple MPEG4 component streams, per object segmentation and decoding can enable the composition of a single scene from several temporally-independent multimedia objects. This provides the possibility of decoding only a limited number of multimedia objects, and not all objects, to provide an object-based scalability.
  • Multimedia objects are discrete, and also have distinctive Internet addresses, and hence, the mobile client will have the opportunity to interact with any given media sequence on a per object basis.
  • A window of multimedia objects is made available on the host server. This window is comprised of a number of recently created multimedia objects.
  • a larger multimedia object window can be dynamically created on the host server.
  • each object is numerically ordered.
  • the transport mechanism for the multimedia objects is assumed to be HTTP for the purposes of illustration, however, other protocols which access content through file and directory structures could be used. For example FTP, IMAP4 and NNTP all have the capability to serve files in a directory structure.
  • the number of multimedia objects that can be buffered in memory is based on the size of the first multimedia object.
  • It also depends on the amount of free memory available; processing of further multimedia objects in the sequence can be optional and dependent on whether the implementation allows the modification of object parameters between multimedia objects (such as the size of the visual frame or the sample rate of the audio stream).
  • the buffering and playback of multimedia objects in a synchronized fashion is critical to fluid playback.
  • the HTTP 1.1 protocol and some implementations of HTTP 1.0 allow the use of a persistent connection over TCP to perform multiple requests.
  • some HTTP 1.1 implementations allow the use of pipelined connections allowing the HTTP client to perform many requests in rapid succession decreasing the latency between the request and reception of each multimedia object. When possible, the invention can take advantage of this capability.
  • The invention pertains to multimedia object deployment to large numbers of widely distributed wireless Internet clients.
  • Media content converted to multimedia objects must be available to many users and the distribution system must be sufficiently robust to allow peaks in demand and have sufficient geographic proximity that network congestion and latency are reduced.
  • The transcoding of input media formats and the creation of multimedia objects is done in real-time and immediately deployed to every content server of the distributed system.
  • These content servers may be at the same location, or they may be geographically placed to support local mobile clients and take advantage of alternative mobile network proxy servers and proxy server object caching.
  • the distribution of multimedia objects to alternative content servers can take place on the wired Internet.
  • the invention provides novel optimizations for digital video decoding.
  • Some of these optimizations can then be used by an expert assessment process, whereby the decoder maintains a state-information list of short-cuts related to perceived frame rate, sorted starting with those that will decrease output quality the least and ending with those that will decrease output quality the most but have the most impact on decoding speed.
  • the client player-decoder dynamically adjusts how many shortcuts must be taken.
  • These short-cuts are specifically designed to drastically reduce the number of computations necessary at certain critical steps in the video decoding process, at the cost of video output quality. This allows the video decoder to scale in complexity based on the processing power of the device being used. It also allows users to experience multimedia playback despite the limitations of the device they may be using.
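  • As a hedged illustration of this expert assessment process (the short-cut names, target frame rate and thresholds below are assumptions, not taken from the specification), a decoder might keep an ordered list of short-cuts and enable more of them whenever the measured frame rate falls below a target:

    import java.util.Arrays;
    import java.util.List;

    /** Illustrative sketch of an expert-assessment short-cut controller (hypothetical names). */
    public class ShortcutController {
        // Short-cuts ordered from least to most quality impact, per the description above.
        private final List<String> shortcuts = Arrays.asList(
                "SKIP_CHROMA_BILINEAR",   // cheapest, least visible
                "FAST_IDCT_RESIDUES",
                "SKIP_LUMA_BILINEAR",
                "APPROXIMATE_YUV_TO_RGB"  // most visible, biggest speed gain
        );
        private int enabledCount = 0;      // how many short-cuts are currently taken
        private final double targetFps;

        public ShortcutController(double targetFps) { this.targetFps = targetFps; }

        /** Called once per decoded frame with the frame rate actually achieved. */
        public void update(double measuredFps) {
            if (measuredFps < targetFps && enabledCount < shortcuts.size()) {
                enabledCount++;            // decoding too slow: take one more short-cut
            } else if (measuredFps > targetFps * 1.2 && enabledCount > 0) {
                enabledCount--;            // comfortably fast: restore some quality
            }
        }

        public boolean isEnabled(String shortcut) {
            int idx = shortcuts.indexOf(shortcut);
            return idx >= 0 && idx < enabledCount;
        }
    }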
  • the invention pertains to decoding audio on a limited mobile device.
  • Mobile devices present many challenges for audio playback.
  • Typical mobile processors have integer math only, little or no on-CPU cache, and a limited audio output interface.
  • the present invention takes several approaches to getting maximum audio quality out of these very limited devices, which are applicable to other audio codecs such as AC3, AMR and WMA v9, as well as AAC LC. These include a novel use of Huffman codebooks, a highly optimized IMDCT process, and innovative windowing optimizations.
  • The invention pertains to the implementation of a mobile handset MPEG4 video and AAC audio player that is hardware-independent and operating-system independent, and can simply be downloaded prior to media playback on mobile clients that do not have embedded media players.
  • FIG. 1 is a general diagram for a distributed network system for multimedia-on-demand, utilizing a centralized content server, indexing host, multimedia object creator and transcoder for live broadcast applications or to transcode and create multimedia objects from archived multimedia files, and distributed content servers involving high-capacity cellular network proxy servers and mobile clients running downloaded Java applets or embedded or downloaded non-Java multimedia object players; and
  • FIG. 2 is a flow diagram illustrating a multimedia object identification method performed by the multimedia object creator of FIG. 1 for mobile clients served by the host content server of FIG. 1; and
  • FIG. 3 illustrates a multimedia object windowing sequence for a live transmission of multimedia objects created by the multimedia object creator of FIG. 1; and FIG. 3a illustrates multimedia object creation for single stream multimedia, just audio and multi-stream MPEG4 composite layers by the multimedia object creator of FIG. 1; and
  • FIG. 4 is a flow diagram illustrating the steps of multimedia object processing by the multimedia object players of FIG. 1;
  • FIG. 5 is a diagram illustrating the architecture and processing interaction for the large-scale distribution of live and archived multimedia content in the distributed network being managed by the indexing host of FIG. 1, involving remote transcoding/multimedia object creating servers and a central indexing host server; and
  • FIG. 6 is a general diagram illustrating standard MPEG4 Simple Profile decoding steps, which are followed in general for video decoding by the multimedia object players of FIG. 1;
  • FIG. 7 is a flow diagram illustrating an optimized Huffman codebook method for digital video decoding method used by the multimedia object players of FIG. 1;
  • FIG. 8 is a flow diagram illustrating a method of using a texture buffer to process P- frames for digital video decoding.
  • FIG. 9 is a flow diagram showing a method of video decoding performing faster motion compensation without bilinear interpolation when less quality but faster processing is required that is used by the multimedia object players of FIG. 1;
  • FIG. 10 is a flow diagram illustrating an optimized digital video decoding method for optimizations in pixel processing and dequantization used by the multimedia object players of FIG. 1;
  • FIG. 11 is a flow diagram illustrating a novel use of Chen's algorithm used by the multimedia object players of FIG. 1;
  • FIG. 12 is a flow diagram showing a novel handling of YUV to RGB conversion used by the multimedia object players of FIG. 1;
  • FIG. 13 is a flow diagram illustrating decoding short cuts for effective video decoding on variable limited mobile client hardware used by the multimedia object players of FIG. 1;
  • FIG. 14 is a general diagram illustrating basic steps of AAC digital audio decoding and other similar audio codec decoding, which are followed in general by the multimedia object players of FIG. 1;
  • FIG. 15 is a flow diagram illustrating an optimized Huffman codebook method for digital audio decoding used by the multimedia object players of FIG. 1;
  • FIG. 16 is a flow diagram illustrating an optimized digital audio decoding method for optimizations in the IMDCT step used by the multimedia object players of FIG. 1;
  • FIG. 17 illustrates simplified input short-cut processes specific to AAC Low Complexity (LC) audio decoding profile used by the multimedia object players of FIG. 1;
  • FIG. 18 shows audio decoding using an alternative bit-operation based Taylor computation method used by the multimedia object players of FIG. 1;
  • FIG. 19 illustrates further IMDCT short window processing for digital audio decoding for the method used by the multimedia object players of FIG. 1;
  • FIG. 20 illustrates low energy gap timing in audio playback for the method of audio decoding used by the multimedia object players of FIG. 1.
  • Fig. 1 illustrates a centralized content server system 1, utilizing a transcoder 2 and a multimedia object creator 3 to create multimedia objects from a live broadcast 4 or to transcode and create multimedia objects from archived multimedia files 5.
  • the central server includes an indexing host system 6 to deploy created multimedia objects to relevant content servers 7 through the wired Internet and to verify all geographically dispersed wireless clients 8.
  • The system includes the potential use of proxy cellular network HTTP servers 9, which can cache large numbers of small multimedia objects to support large numbers of concurrent wireless clients 8 running multimedia object Java applets 10 or embedded or downloaded non-Java multimedia players 11.
  • FIG. 2 is a flow diagram illustrating the process of multimedia object identification by the multimedia object creator 3. This process encodes a Supplied Identification into each multimedia object to identify the transport protocol, source host, path and number of objects of a particular multimedia stream.
  • the host directory name of the multimedia objects is formatted to contain the number of video objects located within the directory.
  • A delimiting character is placed between the end of the directory name and the number indicating the multimedia object count. This allows the use of directory names terminating in numbers while still indicating an unambiguous multimedia object count.
  • Multimedia objects within the directory are named similarly to the directory name. However, instead of the multimedia object count following the delimiting character, a number indicating the multimedia object's position within the sequence of multimedia objects is specified, e.g. StarWars_1.mp4, StarWars_2.mp4, etc. The following is an example of the naming fields:
  • Host content servers 1 or 7 serving the multimedia objects directly to mobile clients 8
  • <MOSName> Name of the multimedia object sequence (perhaps the name of a broadcast)
  • <MOCount> Number of multimedia objects (integer greater than zero)
  • <MOSeqNum> The multimedia object's sequence number (integer greater than zero, less than or equal to MOCount)
  • <Delim> The delimiting character
  • <MOType> Encoding type, e.g. mp4
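  • For illustration only, the naming convention described above might be applied as in the following sketch; the underscore delimiter and the StarWars names simply follow the example given earlier, and the helper itself is hypothetical:

    /** Illustrative helper for the naming convention described above (hypothetical). */
    public class MultimediaObjectNames {
        /** Directory name: <MOSName><Delim><MOCount>, e.g. "StarWars_42". */
        public static String directoryName(String mosName, char delim, int moCount) {
            return mosName + delim + moCount;
        }

        /** Object name: <MOSName><Delim><MOSeqNum>.<MOType>, e.g. "StarWars_1.mp4". */
        public static String objectName(String mosName, char delim, int seqNum, String moType) {
            return mosName + delim + seqNum + "." + moType;
        }

        /** Recover the object count from a directory name by splitting on the last delimiter. */
        public static int objectCount(String directoryName, char delim) {
            int i = directoryName.lastIndexOf(delim);
            return Integer.parseInt(directoryName.substring(i + 1));
        }

        public static void main(String[] args) {
            String dir = directoryName("StarWars", '_', 42);
            System.out.println(dir + "/" + objectName("StarWars", '_', 1, "mp4"));
            System.out.println("count = " + objectCount(dir, '_'));
        }
    }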
  • the first multimedia object in the sequence to be played could have the index 0 of a counting series.
  • a window of multimedia objects is made available on all content servers 1 and 7. This window would be comprised of a number of recently created multimedia objects transcoded from the live stream.
  • The window of multimedia objects allows clients to begin reception of a multimedia object sequence at an earlier point than the most recently created multimedia object. This mechanism provides an extra degree of forgiveness in high-latency situations, where there may be a delay between the client 8 discovering the most recent multimedia object and the actual request.
  • the window of multimedia objects would shift as more multimedia objects are transmitted from the live source.
  • the multimedia object sequences would begin at 0 and be numbered sequentially.
  • the window size hence permits the removal of earlier objects.
  • A live stream may, for example, be comprised of a window of four objects.
  • Upon transmission of a fifth video object, the first multimedia object would be deleted, resulting in the sequence illustrated in FIG. 3.
  • the wireless client 8 can have the capability to search forward in the multimedia object sequence among the multimedia video objects in window. This provides additional transmission continuity in cases where it is not possible to maintain sufficient bandwidth for all multimedia objects in the live sequence.
  • A larger multimedia object window can be used.
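  • A minimal sketch of the sliding window behaviour described above is shown below; the window size of four and the numbering from zero follow the example, while the class and method names are assumptions:

    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Minimal sketch of the sliding window of live multimedia objects. */
    public class LiveObjectWindow {
        private final int windowSize;
        private final Deque<Integer> window = new ArrayDeque<>(); // sequence numbers currently served
        private int nextSeqNum = 0;                               // sequences begin at 0

        public LiveObjectWindow(int windowSize) { this.windowSize = windowSize; }

        /** Called when the multimedia object creator finishes a new object. */
        public void publishNextObject() {
            window.addLast(nextSeqNum++);
            if (window.size() > windowSize) {
                window.removeFirst();      // the oldest object is removed from the host server
            }
        }

        public Deque<Integer> availableObjects() { return window; }

        public static void main(String[] args) {
            LiveObjectWindow w = new LiveObjectWindow(4);
            for (int i = 0; i < 5; i++) w.publishNextObject(); // the fifth object evicts object 0
            System.out.println(w.availableObjects());          // prints [1, 2, 3, 4]
        }
    }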
  • the mobile client 8 may also store more than two multimedia objects in the internal buffer.
  • wireless networks over which the limited devices operate often have a very high latency. This is especially evident when TCP's 3-way handshake must be performed for every connection that is made. It is therefore ideal to use an application protocol that is able to minimize the latency between each request for a multimedia object.
  • The transport mechanism for multimedia objects is assumed to be HTTP for the purposes of the system of FIG. 1; however, other protocols which access content through file and directory structures could be used.
  • FTP, IMAP4 and NNTP all have the capability to serve files in a directory structure.
  • HTTP 1.1 protocol and some implementations of HTTP 1.0 allow the use of a persistent connection over TCP to perform multiple requests.
  • HTTP 1.1 implementations allow the use of pipelined connections, allowing the HTTP client to perform many requests in rapid succession decreasing the latency between the request and reception of each multimedia object.
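  • As an assumed illustration of retrieving consecutive multimedia objects over a reused HTTP connection, the following sketch relies on the standard java.net.HttpURLConnection keep-alive behaviour; the host, directory layout and object names are placeholders, and true HTTP 1.1 pipelining would require a lower-level client:

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    /** Sketch: retrieve consecutive multimedia objects, reusing a persistent connection. */
    public class ObjectFetcher {
        public static byte[] fetchObject(String baseUrl, String name, int seq, String type) throws Exception {
            URL url = new URL(baseUrl + "/" + name + "_" + seq + "." + type); // hypothetical layout
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Connection", "keep-alive"); // ask the server to keep the TCP connection
            try (InputStream in = conn.getInputStream();
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                // Reading the body fully and closing the stream lets the connection be reused.
                while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
                return out.toByteArray();
            }
        }

        public static void main(String[] args) throws Exception {
            for (int seq = 1; seq <= 4; seq++) {
                byte[] obj = fetchObject("http://content.example.com/StarWars_4", "StarWars", seq, "mp4");
                System.out.println("object " + seq + ": " + obj.length + " bytes");
            }
        }
    }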
  • The transcoder 2 and multimedia object creator 3 create multimedia objects of an optimal digital encoding such as MPEG4/AAC from analog multimedia or an alternative codec stream 12, such as MPEG1, MPEG2, MOV, AVI, WMV, ASF, or higher-encoded MPEG4.
  • the input stream is transcoded into MPEG4 and AAC and then it is split according to a specified interval, such as 10 seconds, into multimedia objects.
  • the video component of the stream is scanned after the specified interval for the next I-frame 13, where the split is made. Since typically there are no predicted frames in digitized audio 15, a conditional split is made to correspond to the video segmentation 14.
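  • The segmentation rule just described, i.e. split the video at the first I-frame after the target interval and split the audio to match, could look roughly like the following sketch; the frame representation and timestamps are assumptions:

    import java.util.ArrayList;
    import java.util.List;

    /** Sketch of splitting a transcoded stream into multimedia objects at I-frame boundaries. */
    public class ObjectSplitter {
        static class Frame {
            final double timeSec; final boolean isIFrame;
            Frame(double t, boolean i) { timeSec = t; isIFrame = i; }
        }

        /** Returns the timestamps at which the video (and, correspondingly, the audio) is split. */
        public static List<Double> splitPoints(List<Frame> video, double intervalSec) {
            List<Double> cuts = new ArrayList<>();
            double nextCut = intervalSec;
            for (Frame f : video) {
                // Once the target interval has elapsed, cut at the next I-frame encountered.
                if (f.timeSec >= nextCut && f.isIFrame) {
                    cuts.add(f.timeSec);
                    nextCut = f.timeSec + intervalSec;
                }
            }
            return cuts; // the audio is split at approximately the same timestamps
        }
    }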
  • FIG. 4 is a flow diagram illustrating client side processing of multimedia objects.
  • Multimedia object player 10 or 11 processing is initiated by the receipt of the first multimedia object from a content server 1 or 7.
  • The first multimedia object's Identification is parsed and the total number of multimedia objects stored within the Identification's <path> is determined, or, in the case of live transmission applications, the number of multimedia objects in the window.
  • Heap memory allocations for the multimedia objects and meta-data can then be determined. These allocations are created of sufficient size that multimedia objects that follow can overwrite older multimedia objects in the same memory allocation without overflowing.
  • This information includes the integer values nMObjectPlaying (the index of the multimedia object currently playing) and nMObjectRecving (the index of the multimedia object currently being received), and the Boolean value bWaitForBuffer, which indicates to the playback component that it should wait until buffering of further multimedia objects is complete.
  • This state information provides a mechanism with which the reception and playback of multimedia objects can be synchronized.
  • the multimedia object contains information required to properly configure the audio and video decoders, and this information is passed to the respective decoder.
  • the object player may choose to either delay playback until the multimedia object buffers in memory have filled or may begin playback immediately while requesting the next multimedia object concurrently. This decision can be based on the speed at which the multimedia objects are retrieved versus the playback time of each multimedia object, the latency of requests for multimedia objects or the number of multimedia objects that can be stored in memory at once.
  • the decoder can perform both audio and video decoding in a single thread.
  • the state information described also provides a mechanism which can be used to skip backwards and forwards through the multimedia object sequence. By changing the state information and restarting the retrieval of multimedia objects, the playback of the objects can be repositioned to any multimedia object in the sequence.
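  • A simplified, single-threaded sketch of how the state information above could synchronize reception and playback and reposition within the sequence is shown below; only the three state variables come from the description, the rest is assumed:

    /** Simplified sketch of multimedia object playback/reception synchronisation. */
    public class PlayerState {
        volatile int nMObjectPlaying = 0;        // index of the object currently being played
        volatile int nMObjectRecving = 0;        // index of the object currently being received
        volatile boolean bWaitForBuffer = false; // playback should wait for further buffering

        /** Playback side: may the player advance to the next object? */
        boolean canPlayNext() {
            return !bWaitForBuffer && nMObjectPlaying < nMObjectRecving;
        }

        /** Reposition playback (skip backwards/forwards) to an arbitrary object in the sequence. */
        void seekTo(int objectIndex) {
            nMObjectPlaying = objectIndex;
            nMObjectRecving = objectIndex;  // restart retrieval from the new position
            bWaitForBuffer = true;          // refill buffers before resuming playback
        }
    }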
  • In FIG. 5, a large-scale "live content" application is illustrated.
  • a central server indexing host 17 manages all of the available content and the content servers 7 through which the content is made available.
  • Remote transcoding and multimedia object creating servers 18 that provide continuously updated content must register this content with the indexing host 17.
  • The transcoding servers 18 must also keep the central indexing server updated.
  • Content servers 7 accept and store live content being transmitted from transcoding servers 18. They can also store non-live archive multimedia content, but in a live content type application, they need only cache the most current window of multimedia objects.
  • Content servers 7 are distributed in such a fashion that allows wireless clients 8 a fast and low latency host connection. Content servers 7 could all be connected in a LAN, but for large scale operations, they could have any distribution on the wired Internet.
  • The wireless client 8 can receive the content directly 19 from a content server 7 or indirectly, through a mobile network proxy server.
  • the central indexing host 17 accepts requests from clients 8 for multimedia content 21.
  • the indexing host 17 must reply with the most suitable content server 7 for the client 8.
  • the central indexing host 17 also authenticates 22 clients 8 as they request available content and specific pieces of content.
  • the authentication process is designed in such a way that the content servers 7 do not need to maintain a list of authorized clients 8 and the content available to them. Instead the indexing host 17 authenticates the client 8 and provides the client 8 with an encrypted string that is eventually decrypted by the content server 7.
  • This string is the encrypted form of the concatenation of the content name or description, the current UTC date-time, and an interval of time for which the client 8 is authorized to access the multimedia content.
  • the string is designed to allow the client 8 to access and playback multimedia objects received from a designated content server 7.
  • the indexing host 17 may also provide the client 8 with other information about the multimedia content, along with the encryption string, such as a description of the source, copyrights, and subtitle data-sources.
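  • As an illustration only, the encrypted authorization string could be formed roughly as below; the AES cipher, the field separator and the token layout are assumptions, and the shared-key management between the indexing host 17 and content servers 7 is not shown:

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.time.Instant;
    import java.util.Base64;

    /** Sketch of the authorization string: content name + current UTC time + validity interval. */
    public class AuthToken {
        // Indexing-host side: encrypt the concatenated fields with a shared key (16/24/32 bytes).
        public static String create(String contentName, long validSeconds, byte[] key) throws Exception {
            String plain = contentName + "|" + Instant.now().toString() + "|" + validSeconds;
            Cipher c = Cipher.getInstance("AES");
            c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
            return Base64.getUrlEncoder().encodeToString(c.doFinal(plain.getBytes(StandardCharsets.UTF_8)));
        }

        // Content-server side: decrypt and decide whether the request is still within its interval.
        public static boolean isValid(String token, byte[] key) throws Exception {
            Cipher c = Cipher.getInstance("AES");
            c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"));
            String[] parts = new String(c.doFinal(Base64.getUrlDecoder().decode(token)),
                                        StandardCharsets.UTF_8).split("\\|");
            Instant issued = Instant.parse(parts[1]);           // assumes the name contains no '|'
            long validSeconds = Long.parseLong(parts[2]);
            return Instant.now().isBefore(issued.plusSeconds(validSeconds));
        }
    }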
  • FIG. 6 illustrates the MPEG4 video decoding process, as outlined by the MPEG-4 Committee, for recovering video object plane (VOP) data from the coded bit stream. These steps of video decoding are followed in general by the video decoding process of multimedia players 10 and 11.
  • the decoding process is composed of three major sections: shape, motion, and texture decoding.
  • Coded Bit Streams: The video stream is parsed and demultiplexed to obtain shape, motion, and texture bit streams. Each stream has a decoding process needed in order to reconstruct the VOPs.
  • Shape Decoding: Binary shape decoding is based on a block-based representation.
  • the primary coding methods are block-based context-based binary arithmetic decoding and block-based motion compensation.
  • Variable Length Decoding: Shape information, motion vectors, and the quantized DCT coefficients are encoded using variable length codes. Differential DC coefficients in intra macroblocks are encoded as variable length codes. The final DC value is the sum of the differential DC value and the predicted value. The AC coefficients and non-intra block DC coefficients use a different variable length code.
  • Inverse Scan: Coefficients are scanned during the encoding process for two reasons: to allocate more bits to high-energy DCT coefficients during quantization and to turn the two-dimensional array (8x8) into a one-dimensional array.
  • The reverse process, i.e. the inverse scan, is used on the decoding side to ensure proper dequantization and to restore the two-dimensional information.
  • Inverse AC and DC Prediction: The prediction process is only carried out for intra macroblocks. Previous intra macroblocks are used for forward prediction in order to produce subsequent macroblocks. This optimization process is used to predict both DC and AC coefficients.
  • Inverse Quantization: The two-dimensional array of coefficients produced by the inverse scan is inverse quantized to produce the reconstructed DCT coefficients.
  • the process is trivial; it is basically a multiplication by the quantizer step size.
  • a variable quantizer step size can be produced by using a weighted matrix or a scale factor in order to variably allocate bits during the encoding/decoding process.
  • Inverse DCT (IDCT): The reconstructed DCT coefficients are transformed back into the spatial domain to produce the decoded pixel values.
  • Motion compensation is another technique used to achieve high compression.
  • the algorithm used by MPEG-4 is block-based motion compensation to reduce the temporal redundancy between VOPs.
  • Motion compensation in this case is twofold: it is used to predict the current VOP from the previous VOP, and to interpolate prediction from past and future VOPs in order to predict bidirectional VOPs.
  • Motion vectors must be decoded to predict movement of shapes and macroblocks from one VOP to the next. Motion vectors are defined for 8x8 or 16x16 regions of a VOP.
  • Video streams must begin with a frame that makes no temporal reference to any earlier frames or an Intra-Frame (I-Frame).
  • A second type of VOP, which allows temporal reference to the previous frame in the stream, is known as a Predicted Frame (P-Frame).
  • Macroblocks within P-Frames may contain motion vectors to enable motion correction from the previous frame. These macroblocks often contain pixel residue information which includes corrections to the predicted pixels. Motion compensation must occur for many of the macroblocks within P-Frames and is a critical component of any video decoding mechanism.
  • Motion vectors can be compressed using Huffman codes. These are binary Variable Length Codes (VLC) which represent values occurring with high probability with shorter binary length than values which occur with less probability.
  • the rapid decoding of VLCs is critical to any decoding application on constrained devices.
  • The theoretical Huffman codebook process reads bits from the packet bitstream until the accumulated bits match a codeword in the codebook. This process can be thought of as logically walking the Huffman decode tree by reading one bit at a time from the bitstream, and using the bit as a decision Boolean to take the 0 branch (left side) or the 1 branch (right side). Walking this binary tree finishes when the decoding process hits a leaf in the decision tree; the result is the entry number corresponding to that leaf. Reading past the end of a packet propagates the 'end-of-stream' condition to the decoder.
  • the novel approach taken to decode VLCs by the video decoding process operating on the multimedia object players 10 and 11, is illustrated in FIG. 7, and can be precisely described as follows:
  • A number of bits is read from the video stream into an integer buffer N; the number of bits read is equivalent to the length of the longest code in the VLC codebook.
  • the roof of logarithm (base 2) of N is taken. Based on the result, N is shifted and used as an index into an array containing the true value indicated in the codebook and the true length of the code.
  • the number of bits indicated as the true length is then removed from the video stream and processing continues.
  • the maximum length of a code in Table B-7 is 9.
  • If the roof of the logarithm (base 2) of N is found to be, for example, the value 4, that value is then used to identify the array in which N is used as an index to locate the appropriate decoded value. N can also be shifted to remove irrelevant bits, allowing the lookup array to be smaller.
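  • A hedged sketch of this table-driven VLC decoding short-cut follows; the bit-reader interface and the exact indexing arithmetic are illustrative assumptions, and only the overall structure (read the longest code length, select a lookup table from the bit length of N, look up the true value and true length, then consume only the true length) follows the description above:

    /** Sketch of the VLC decoding short-cut: read max-length bits, select a table by log2, look up. */
    public class FastVlcDecoder {
        static class Entry {
            final int value; final int codeLength;
            Entry(int v, int l) { value = v; codeLength = l; }
        }

        interface BitReader { int peek(int nbits); void skip(int nbits); }

        private final int maxCodeLength;   // longest code in the codebook, e.g. 9 for Table B-7
        private final Entry[][] tables;    // tables[k] indexed by N shifted; k from the top set bit of N

        FastVlcDecoder(int maxCodeLength, Entry[][] tables) {
            this.maxCodeLength = maxCodeLength;
            this.tables = tables;
        }

        /** bits.peek(n) returns the next n bits without consuming them; bits.skip(n) consumes them. */
        int decode(BitReader bits) {
            int n = bits.peek(maxCodeLength);                 // read as many bits as the longest code
            int log2 = 32 - Integer.numberOfLeadingZeros(n);  // bit length of N ("roof of log2" above);
                                                              // n == 0 is handled by tables[0] here
            Entry[] table = tables[log2];
            Entry e = table[n >>> (maxCodeLength - log2)];    // shift away irrelevant bits, use N as index
            bits.skip(e.codeLength);                          // remove only the true code length
            return e.value;
        }
    }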
  • FIG. 8 is a flow diagram, describing the video decoding process in the multimedia object players 10 and 11, which illustrates that a texture buffer large enough to contain 4 luminance and 2 chrominance blocks (the dimensions of a macroblock exemplified in the MPEG4 specification) is used to store the predicted pixels from a reference frame.
  • This texture buffer is much smaller than the original video frame and decreases the amount of reading from and writing to non-consecutive bytes within the reference and output video frames.
  • All pixel residues are applied to the texture buffer, which is then copied to the output frame.
  • This method of processing P-frames is optimal in situations where the main processing unit has sufficient cache to store the texture information of the entire Macroblock. In cases where the limited device has very little or no on-die cache, it may be preferable to avoid using a macroblock texture buffer. Also, macroblocks with motion vector information contain pixel residue values that are often distributed in a much smaller range of values than the pixels of a texture. In cases where the device is unable to decode the video stream in real-time, a faster but less accurate IDCT algorithm can be used to process these residue values. Furthermore, to minimize the effect of the less accurate IDCT algorithm, this step is taken first on chrominance pixel residues, but can also occur for luminance pixel residues as required.
  • The motion vector information associated with a macroblock often references a point between pixels on the reference VOP. This requires that decoders perform bilinear interpolation between pixels. This is a time consuming process requiring the sampling of four source pixels, four additions and a single divide operation for every output pixel.
  • The video decoding process of the multimedia object players 10 and 11, shown in the flow diagram of FIG. 9, uses faster motion compensation without bilinear interpolation when less quality but faster processing is required.
  • Digital video codecs define Luminance and Chrominance values within a given subrange of values; MPEG4 uses [0, 255]. This allows the decoding software to store the Luminance and Chrominance pixels within a single byte of data with the correct precision.
  • Values outside the [0, 255] range are often generated during motion compensation and in the inverse DCT steps. Attempting to store values outside this range results in single-byte overflows causing graphical errors in the final video output. Clipping these values and modifications to the dequantization process can be very time consuming and can result in a decrease of output correctness.
  • FIG. 10 is a flow diagram illustrating a novel optimization for the dequantization step of digital video decoding in the multimedia object players 10 and 11.
  • The novel optimization requires a reduction in pixel accuracy but allows values outside the range [0, 255] to be represented in a byte field without an overflow.
  • The range [-128, 383] is sufficient to store nearly all potential resulting Luminance and Chrominance pixel values.
  • By taking the original pixel value, adding 128 to it and dividing the result by two, values in the [-128, 383] range may be represented in [0, 255] with a decrease in accuracy of 50%. This decrease in luminance and chrominance accuracy is not a factor on many limited devices as the RGB color resolution is often in the 4-bit to 18-bit range.
  • FIG. 11 is a flow diagram illustrating a novel use of Chen's algorithm in the multimedia object players 10 and 11.
  • The IDCT process can be further optimized by recording which rows of the input matrix to the IDCT are populated with values. This same mechanism can be used to ignore certain input values of insufficient energy to make a very noticeable impact on the output image and further decrease processing time.
  • FIG. 12 is a flow diagram showing video decoding of the YUV to RGB step in the multimedia object players 10 and 11, as follows:
  • A minimum amount of reading from the source Luminance and Chrominance planes is desired. This is accomplished by iterating through pixels in the source plane. A fixed number of Luminance and Chrominance values in a column are read and the resulting RGB values computed for each pixel position. The pixel values are then copied in a looping fashion first by column, then by row to the output plane. This provides a way to read a single input value which may result in many output values in the output plane when scaling up.
  • When scaling down, a minimum number of reads from the source Luminance and Chrominance planes is likewise desired. This is accomplished by iterating through pixel positions in the output plane and calculating the corresponding source pixel in the input plane. This provides a way to read a single input value for every output value and minimizes the number of input-plane reads that are necessary.
  • The YUV to RGB conversion step is such a time consuming one that methods of improving the speed of computation at the expense of output quality have been implemented. Improvements in speed can be obtained by sampling only a subset of the chrominance pixels, avoiding pixel clipping or calculating the Red and Blue values for only a subset of output pixels. All of these methods are used together to provide several quality levels in the YUV to RGB step.
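  • By way of illustration only, the following Java sketch combines the YUV to RGB conversion with the scaling-down case in a single pass over the output plane; the integer constants are approximate BT.601 factors, and the 4:2:0 chroma layout, method and array names are assumptions rather than the original implementation:

    // Illustrative scale-down conversion: one source read per output pixel.
    void yuvToRgbScaled(byte[] yPlane, byte[] uPlane, byte[] vPlane,
                        int srcW, int srcH, int[] rgbOut, int dstW, int dstH) {
        for (int oy = 0; oy < dstH; oy++) {
            int sy = oy * srcH / dstH;                    // source row for this output row
            for (int ox = 0; ox < dstW; ox++) {
                int sx = ox * srcW / dstW;                // source column
                int y = yPlane[sy * srcW + sx] & 0xff;
                int c = (sy >> 1) * (srcW >> 1) + (sx >> 1);   // assumed 4:2:0 layout
                int cb = (uPlane[c] & 0xff) - 128;
                int cr = (vPlane[c] & 0xff) - 128;
                int r = y + ((359 * cr) >> 8);            // ~1.402 * Cr
                int g = y - ((88 * cb + 183 * cr) >> 8);  // ~0.344 * Cb + 0.714 * Cr
                int b = y + ((454 * cb) >> 8);            // ~1.772 * Cb
                if (r < 0) r = 0; else if (r > 255) r = 255;   // clipping may be skipped
                if (g < 0) g = 0; else if (g > 255) g = 255;   // at lower quality levels
                if (b < 0) b = 0; else if (b > 255) b = 255;
                rgbOut[oy * dstW + ox] = (r << 16) | (g << 8) | b;
            }
        }
    }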
  • FIG. 13 is a flow diagram summarizing the short-cut optimization processing by the video decoding process used in the multimedia objects players 10 and 11. State information is maintained about the quality levels with which the current video stream is processed.
  • Short-cuts in the decoding process must be made to allow the device to maintain synchronicity between the audio and video playback.
  • These short-cuts are specifically designed to drastically reduce the number of computations necessary at certain critical steps in the video decoding process at the cost of video output quality.
  • This mechanism allows video decoding to scale in complexity based on the processing power of the device being used. It has been found that testing three quality levels at each critical step appears to yield the best results. The highest quality is consistent with the video codec specification and displays a correct image.
  • A medium quality level indicates that certain time consuming short-cuts are made with some impact on image quality.
  • A low quality level indicates that drastic reductions in display quality are made to improve processing time - the output video can be unrecognizable at times and as a result this level is used only in drastic cases of a sudden drop in processor availability.
  • The state information is composed of a series of integers that correspond to various steps in the decoding process and define the quality at which the decoder should perform those steps.
  • The implemented system in the multimedia players 10 and 11 consists of six of these integers:
  • nFrameRateQuality Defines the allowance to drop frames (from a single P-Frame occurring before an I-Frame up to dropping all P-Frames)
  • nVideoQuality In addition to the set of integers defining the actual quality at various steps, a single integer representing the current quality level of the overall decoding is used (named nVideoQuality in this instance). Each step quality has a very limited number of possibilities (HIGH, MEDIUM, LOW, etc.), however, nVideoQuality can take on many values. At each value of nVideoQuality, a ruleset defines the quality of each of the above step qualities. At the highest value of nVideoQuality, all step qualities are set to maximum. As nVideoQuality is decreased, the step qualities are incrementally reduced according to the ruleset.
  • Some states of quality levels are less preferable than others. For example, it is not preferable to render many frames at the lowest quality setting of nLumaIDCTQuality - it is instead more preferable to drop frames if there is insufficient processing capability to perform nLumaIDCTQuality at a higher quality.
  • The ruleset is designed to take these possibilities into account.
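  • A minimal Java sketch of such a ruleset is shown below; apart from nVideoQuality, nFrameRateQuality and nLumaIDCTQuality, the step-quality names and the table entries are illustrative assumptions only, not values from this description:

    static final int HIGH = 2, MEDIUM = 1, LOW = 0;
    int nFrameRateQuality, nLumaIDCTQuality, nChromaIDCTQuality;   // assumed names other
    int nMotionCompQuality, nYuvToRgbQuality, nDequantQuality;     // than the first two

    // Rows are overall nVideoQuality levels from best (index 0) to worst;
    // the concrete entries are illustrative only.
    static final int[][] QUALITY_RULESET = {
        { HIGH,   HIGH,   HIGH,   HIGH,   HIGH,   HIGH   },
        { HIGH,   HIGH,   MEDIUM, HIGH,   MEDIUM, HIGH   },
        { MEDIUM, HIGH,   MEDIUM, MEDIUM, MEDIUM, MEDIUM },
        { MEDIUM, MEDIUM, LOW,    MEDIUM, LOW,    MEDIUM },
        { LOW,    LOW,    LOW,    LOW,    LOW,    LOW    }
    };

    void applyVideoQuality(int nVideoQuality) {
        int[] q = QUALITY_RULESET[nVideoQuality];
        nFrameRateQuality  = q[0];
        nLumaIDCTQuality   = q[1];
        nChromaIDCTQuality = q[2];
        nMotionCompQuality = q[3];
        nYuvToRgbQuality   = q[4];
        nDequantQuality    = q[5];
    }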
  • FIG. 14 illustrates the general steps of audio decoding followed by the audio decoding process of the multimedia object players 10 and 11.
  • The first step in AAC audio decoding (bit-stream de-multiplexing), which is common to other digital codecs, is to establish frame alignment. This involves finding the AAC sync word and confirming that the AAC frame does not contain any errors, if error checking is enabled in the frame. Once the frame sync is found, the bitstream is de-multiplexed or unpacked. This includes unpacking of the Huffman decoded and quantized scale factors, the M/S synthesis side information, the intensity stereo side information, the TNS coefficients, the filter bank side information and the gain control words.
  • VLCs variable length fields
  • Bits are read off the stream into an integer N.
  • The number of bits read is equivalent to the maximum number of bits in the longest codeword in the codebook.
  • The first binary 0 is then located starting from the highest bit.
  • The left-based index of this first 0 is then used to mask off all the previous 1s, and N is shifted and used as an array index.
  • The AAC standard's 2nd codebook contains the Code/Value pair:
  • The ZeroPosition of the above integer is found to be 4.
  • ZeroPosition is then used to mask off the 1 bits previous to it, yielding the integer "0101X". This can then be used as an index to an array or be shifted to remove the irrelevant bits, allowing the lookup array to be smaller.
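  • A Java sketch of this zero-position lookup is given below for illustration only; the BitStream helper with peekBits/skipBits and the per-group lookup tables prepared offline from the codebook are assumptions:

    static int decodeAudioVlc(BitStream bs, int maxCodeLen,
                              int[][] valueTable, int[][] lengthTable) {
        int n = bs.peekBits(maxCodeLen);           // bits are not consumed yet
        int zeroPos = 0;                           // left-based index of the first 0 bit
        int probe = 1 << (maxCodeLen - 1);
        while ((n & probe) != 0) {
            zeroPos++;
            probe >>= 1;
        }
        int idx = n & ((1 << (maxCodeLen - zeroPos)) - 1);  // mask off the leading 1s
        // idx may be shifted further here to keep the per-group lookup array small
        bs.skipBits(lengthTable[zeroPos][idx]);    // remove only the code's own bits
        return valueTable[zeroPos][idx];
    }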
  • M/S synthesis conditionally dematrixes two channels into a stereo pair. The samples may already represent the left and right signals, in which case no computation is necessary. Otherwise the pair must be de-matrixed via one add and one subtract per sample pair in order to retrieve the proper channel coefficients.
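  • A minimal Java sketch of this de-matrixing (one add and one subtract per coefficient pair); the method and parameter names are illustrative only:

    void msSynthesis(int[] left, int[] right, int start, int end, boolean msUsed) {
        if (!msUsed) {
            return;                     // samples already hold the left and right signals
        }
        for (int i = start; i < end; i++) {
            int mid = left[i];
            int side = right[i];
            left[i] = mid + side;       // one add ...
            right[i] = mid - side;      // ... and one subtract per sample pair
        }
    }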
  • Intensity stereo identifies regions in a channel pair that are similar, except for their position.
  • Left-channel intensity regions must have inverse quantization and scaling applied.
  • Right-channel intensity stereo regions use the left-channel inverse quantized and scaled coefficients, which must be re-scaled by the intensity position factors.
  • The next standard step, temporal noise shaping (TNS), has a variable load, depending on the number of spectral coefficients that are filtered.
  • IMDCT Inverse Modified Discrete Cosine Transform
  • FIG. 16 illustrates Intermediate 23 and Final 24 optimizations for the digital audio IMDCT step used by the audio decoding process in the multimedia object players 10 and 11.
  • The audio decoder of the multimedia object players 10 and 11 combines the use of a specific Inverse Fast-Fourier Transform with Pre- and Post- processing steps.
  • This method produces a simplified IMDCT algorithm with O(n * log(n)) runtime.
  • This method can also incorporate the use of various IFFT algorithms based on the sparseness of input.
  • The IMDCT algorithm accepts an input array X of spectral coefficients in the frequency domain and outputs an array of amplitude values in the time domain twice the size of the input.
  • The implementation of the AAC Low Complexity codec requires that the IMDCT algorithm accept input array lengths of 128 or 1024 Real values and results in an output of 256 or 2048 Real values.
  • N refers to the size of the output (256 or 2048)
  • Im(X) returns the imaginary component of some variable X
  • Re(X) returns the real component.
  • Im(Z) = SQRT(2/N) * sin(2*Pi*(n + 1/8) / N)
  • IFFT Inverse Fast Fourier Transform
  • A fixed-point IFFT algorithm is used to allow processing of the IMDCT on devices which lack floating point capabilities. Most mobile devices do not allow floating point computations, and on those that do, it is usually too slow. Due to the properties of the inverse Fourier transformation, the transformation can be calculated in a fixed-point manner. In a fixed-point transformation, the input is scaled by multiplying the input values by a scale factor and then the correct output is found by multiplying by the reciprocal of the scale factor. Therefore a scaling operation is applied before and after the IFFT. A scale factor which is a power of two is chosen so that the scaling and re-scaling operations can be accomplished by bit shift operations. Bit shifts are among the fastest operations for CPUs.
  • Re-order, pre-scale and twiddle The method loops over the input data, and each datum is complex-multiplied by the twiddle factor, and is then re-scaled by doing a bit shift operation. However, the twiddle factor is already bit-shifted so it can be treated as a fixed-point number, so the scaling operation's bit shift is partially performed by the twiddle factor itself.
  • The relevant twiddle factors are stored in an array table. Once the complex multiplication and scaling are done, the resulting values are stored in the reordered location in the IFFT input array.
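  • The following Java sketch illustrates the re-order/pre-scale/pre-twiddle loop, assuming a twiddle table already stored as pre-shifted fixed-point integers; the pairing of the spectral lines follows one common IMDCT-via-IFFT formulation and is an assumption made for illustration, not a quotation from this description:

    static final int TWID_BITS = 14;    // assumed fixed-point precision of the twiddle table

    // Build the complex IFFT input: re-order, pre-scale and pre-twiddle in one loop.
    void preTwiddle(int[] spec, int[] re, int[] im,
                    int[] twidRe, int[] twidIm, int nHalf) {
        for (int k = 0; k < nHalf / 2; k++) {
            int dRe = spec[nHalf - 1 - 2 * k];   // assumed pairing of spectral lines
            int dIm = spec[2 * k];
            // Complex multiply by the pre-shifted twiddle factor; the single right
            // shift removes its fixed-point bits and applies part of the pre-scaling.
            re[k] = (int) (((long) dRe * twidRe[k] - (long) dIm * twidIm[k]) >> TWID_BITS);
            im[k] = (int) (((long) dRe * twidIm[k] + (long) dIm * twidRe[k]) >> TWID_BITS);
        }
    }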
  • Re-scale, re-order, post-twiddle, window and overlap Combining these four operations into one step replaces four array accesses with one, and some of the multiplications are also combined into single bit shifts.
  • This method loops over the IFFT output array, and performs four operations in each iteration of the loop: the post-twiddle and rescale are combined, because the post-twiddle uses a twiddle factor table which is already bit- shifted. Windowing is combined in this step also, with window values coming from either a table or a fast integer sine calculator. Finally, values are overlapped and stored in the correct location in the output array.
  • FIG. 17 illustrates simplified input shortcut processes that are specific to AAC Low Complexity (LC) profile which are used in the audio decoding process of multimedia players 10 and 11.
  • LC Low Complexity
  • the Mid/Side, Intensity and Temporal Noise Shaping steps marked with cross hatches above, are optional.
  • Audio decoding can further combine other steps in a novel way. These steps are marked in grey in FIG. 17. If these other steps are combined, there are no dependencies within a frame until the IFFT step within the IMDCT itself is reached. Therefore, operations between noiseless decoding and the pre-IFFT operations within the IMDCT itself are combined, minimizing memory access.
  • IMDCT has four different window shapes which are common in other digital audio codecs: long only, long start, long stop, and eight short. Of these four window sequences, only one (long only) has non-zero data in the entire output synthesis window. In the case of AAC, however, the output synthesis window always has 2048 output values.
  • The calculations can be short-cut, avoiding the post-twiddle, windowing, re-ordering, scaling and overlapping steps entirely.
  • IMDCT permits two different window types: Kaiser-Bessel Derived (KBD) windows and Sine windows.
  • The KBD window uses a complicated formula which cannot be computed in real time, and is always taken from a table.
  • Sine windows are also used from tables in most implementations. However, on a mobile device, which generally has a very small on-CPU memory cache, frequent accesses to a sine window value table will cause cache misses and degraded performance.
  • FIG. 18 shows the audio decoder of the multimedia object players 10 and 11 using a bit-operation based Taylor computation, as follows:
  • Step 3: The bit shift operations in Step 3 can be further combined with other fixed-point multiplication steps.
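  • A hedged Java sketch of a fixed-point Taylor-series sine that could serve as the fast integer sine calculator mentioned above; the Q15 format and the number of terms are assumptions, not the description's exact steps:

    static final int FRAC_BITS = 15;              // Q15 fixed point (assumed)

    // sin(x) for x in [0, pi/2], x given in Q15 radians, via a short Taylor series
    // evaluated with integer multiplies and bit-shift rescaling.
    static int fixedSin(int x) {
        long x2 = ((long) x * x) >> FRAC_BITS;
        long x3 = (x2 * x) >> FRAC_BITS;
        long x5 = (x3 * x2) >> FRAC_BITS;
        long x7 = (x5 * x2) >> FRAC_BITS;
        // sin(x) ~= x - x^3/3! + x^5/5! - x^7/7!; the small divisors could also be
        // folded into pre-computed reciprocal multiplies and shifts.
        return (int) (x - x3 / 6 + x5 / 120 - x7 / 5040);
    }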
  • FIG. 19 illustrates further IMDCT short window processing for even greater efficiency by the audio decoding process of multimedia players 10 and 11.
  • The input of 1024 values is divided into eight short windows of 128 values, and IMDCT, windowing and overlapping are performed on each of these short windows.
  • Each window of 128 values results in a synthesis output window of 256 values. These are then overlapped, resulting in non-zero values in the range of 448 to 1600.
  • The approach taken is to perform each of the IMDCT operations in sequence, rather than in parallel, storing the IMDCT results directly into the regions of the output array which will be zeroed. The output values are then windowed and overlapped. After all eight short windows are completed, the regions of the synthesis output window which are always zero can be disregarded, due to the window shape shortcut method described above.
  • FIG. 20 illustrates an interleaved detection process in the audio decoding of received multimedia objects 25 by the multimedia object players 10 and 11.
  • FIG. 20 illustrates the placement of gaps 26 at detected frames of low total energy 27 as they are detected during audio decoding by the multimedia object players 10 and 11. Playback is then controlled so that the gap will occur during that frame, which may be dropped, so that synchronization with video is not lost.
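  • A minimal Java sketch of selecting the low-energy frame 27 at which to place the gap 26; the buffering structure and method name are assumptions made for illustration:

    // Select the buffered frame of lowest total energy as the gap position.
    int findGapFrame(short[][] pcmFrames) {
        int best = 0;
        long bestEnergy = Long.MAX_VALUE;
        for (int f = 0; f < pcmFrames.length; f++) {
            long energy = 0;
            short[] frame = pcmFrames[f];
            for (int i = 0; i < frame.length; i++) {
                energy += (long) frame[i] * frame[i];   // sum of squares
            }
            if (energy < bestEnergy) {
                bestEnergy = energy;
                best = f;
            }
        }
        return best;   // the block switch (and optional frame drop) is scheduled here
    }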
  • The multimedia object player 10 is a downloadable Java (J2ME) applet, and the described audio and video decoder optimizations and strategies, FIG. 7-13 and FIG. 15-20, as applied to standard MPEG4 and AAC decoding, make it possible for the multimedia object player 10 to play back live music and video, at acceptable frame rates (5-15 fps), on limited cell phone handsets.
  • Java cannot take advantage of hardware capabilities in processing the huge number of calculations and variables required for either MPEG4 decoding or AAC decoding.
  • The required optimizations for the multimedia player 10, to permit the playback of AAC and MPEG4 on current mobile client hardware, are a source of technological innovation and advance.
  • The foregoing is intended, along with the drawings, to illustrate the preferred embodiment of the invention. Those skilled in the art will be able to devise numerous arrangements which, although not explicitly shown or described herein, embody the principles of the invention and are within its spirit and scope as defined by the following claims.

Abstract

A method for multimedia playback and transmission to wireless clients is described. A host webserver transcodes a live digital or analog audio-visual or audio broadcast signal and splits the input stream into small multimedia objects of an efficient compression such as MPEG4/AAC, and then immediately deploys the objects to distributed content servers for a geographically dispersed population of wireless clients. A Java applet object player, downloaded to wireless clients at the beginning of the multimedia on-demand session, interprets and decodes the multimedia objects as they are received, using multiple levels of optimization. The applet uses novel video and audio decoding optimizations which can be generically applied to many digital video and audio codecs, and specifically decodes Simple Profile MPEG4 video and Low Complexity AAC audio.

Description

MULTIMEDIA SYSTEM FOR MOBILE CLIENT PLATFORMS
FIELD OF INVENTION
The invention relates to issues of the wireless Internet, specifically to methods of multimedia transmission and playback for mobile clients.
BACKGROUND OF THE INVENTION
The current methods of distributing multimedia data over the wireless Internet to mobile clients are constrained by existing wireless bandwidth, and the real-time decoding, processing and displaying of multimedia content with limited hardware capabilities. These hardware limitations include slow CPUs, high memory latencies, slow drawing capabilities and the absence of YUV to RGB conversion in the hardware.
Video and audio playback exist on certain cell phone handsets, but this technology is embedded and takes advantage of low-level hardware processing to enable the performance required for media playback. Through embedded media players, several cell phone handsets and handheld computers can play either streamed video or audio. Popular digital video encoding standards for some handsets are H263 and MPEG4. The audio codecs, MP3, AMR and AAC, are also typically supported on some mobile handhelds. Newer video codecs, like H264, could be used for video transmission to cell phones, but would require client systems with fast memory access for their motion compensation methods.
Embedded streaming media players rely on firmware integration to take advantage of the multitasking capabilities of cell phone handsets. At the time of this writing, most cell phones cannot support multimedia playback because they are only capable of supporting one or a few concurrent processing threads. On handsets that have embedded media players, video is also limited to very low frame rates, and the bandwidth available for streaming in North America is low, varying from 2-3 kbytes/second to ISDN speeds of 64 kbits/second. European countries and Japan currently offer 3G network connection speeds, varying from 64-300 kbits/second, and offer more technologically advanced cell phones with embedded media players that can achieve higher video frame rates. For limited per-usage periods, some EV-DO (Evolution Data Optimized) networks can also provide these higher speeds over local CDMA networks.
Decoders for complex video codecs which support highly scalable MPEG4 video, and for more complex, CD quality music audio codecs like AAC, require multiple parallel processes and fast processing. Mathematical algorithms designed for the high number of floating point samples of higher end MPEG4 and AAC, which require approximately 36,000 floating point calculations/second, are intended to run on specialized chips. Even at lower and very low bitrates, where MPEG4 is more efficient than its predecessors, MPEG4 software players depend on PC multitasking or hardware APIs for efficient processing to draw video frames.
Currently, device independent Java (J2ME) offers two standard configurations on mobile clients. The Connected Limited Device Configuration (CLDC) is prevalent in the J2ME world, and powers cellular phones, pagers, PDAs, and other handheld devices. A variant of CLDC, Connected Device Configuration (CDC) targets more powerful devices, such as home appliances, set-top boxes, and Internet TVs.
The second configuration, Mobile Information Device Profile (MIDP), runs on top of the CLDC, and several profiles run on top of CDC. Java players for cell phones like the Oplayo MVQ player exist, but implementations of true, platform independent, MPEG4 Java decoders that will play video on cell phones are not known. More efficient methods, such as U.S. Pat. No. 5,699,121, do not rely on DCT motion compensation, and propose pattern matching to identify regions in the motion residual signal that have not been accurately reproduced and to correct them using a pattern library. But again, this approach does not use MPEG4 video encoding.
Similarly, although Java decoders exist that play MP3 ringtones on cell phones, no Java players are known that will play AAC. In fact, many of the newer IDCT algorithms are targeted more towards customized logic chips that only do IDCT (composed of many simple pipelined instructions as opposed to a few more complex ones). Bit streaming is the standard method of transmitting audio or video to cell phones over wireless networks. Streamed bits are buffered and then decoded, or entire video files are downloaded or proportionately cached, or, as in progressive HTTP streaming, downloaded to a point where complete, continuous playback is deemed possible. In the case of wireless networks, bit streaming of audio/video content is usually done over a non-reliable transport like UDP and requires a lot of error correction and duplication of content (extra stream correction data). Alternatives to streaming methods have been proposed for the transmission of video as objects through pre-fetched lists (Waese et al., U.S. Pat. No. 6,286,031), which are similar to downloading pre-fetched lists of SMIL objects, and instant or scheduled notification file downloading (Stumm, U.S. Pat. No. 5,768,528). However, these do not address specific continuity and deployment issues for wireless multimedia transmission and concurrent playback on limited-tasking cell phone handsets.
Streaming also requires client processing for significant error correction in video decoding and adaptive encoding for varying channel bitrates; in cellular networks, it also requires the use of the cellular MMS multimedia protocol.
The object of the current invention is to solve one or more of the drawbacks in existing methods discussed above, or to provide other improvements to the art.
SUMMARY OF THE INVENTION
The invention relates to wireless Internet multimedia transmission and wireless clients. In a first aspect the invention provides a method of efficient multimedia object creation. In a second aspect, the invention's deployment method addresses the limitations of large-scale multimedia transmission on cellular networks to wireless clients. In a third aspect, the invention relates to methods of decoding of video, sufficiently optimized to be played on a limited wireless client. In a fourth aspect, the invention relates to methods of decoding of audio, sufficiently optimized to be played on a limited wireless client. In a fifth aspect, the limited mobile handset multimedia object player, for both MPEG4 video decoding and AAC audio decoding, is implemented as a device-independent Java (J2ME) applet.
In its first aspect, the invention pertains to efficiently transmittable multimedia object creation. A server-based transcoder, coupled with a multimedia object creator, inputs a standard analog signal or alternative digital signal like MPEG2, and converts this signal into true MPEG4/AAC multimedia objects. As multimedia objects they can then be dynamically uploaded to multiple live content hosting web servers, which, through proximate mobile network proxy servers, make live content accessible to mobile clients as consecutive multimedia objects.
The multimedia object creator produces discrete multimedia objects from video and audio segments of a continuous stream. If the stream is MPEG4, multimedia objects can also be segments of multiple component video and audio streams. In the case of multiple MPEG4 component streams, per object segmentation and decoding can enable the composition of a single scene from several temporally-independent multimedia objects. This provides the possibility of decoding only a limited number of multimedia objects, and not all objects, to provide an object-based scalability.
Multimedia objects are discrete, and also have distinctive Internet addresses, and hence, the mobile client will have the opportunity to interact with any given media sequence on a per object basis. In cases where the multimedia object sequence is being transcoded from a live stream, a window of multimedia objects is made available on the host server. This window would be comprised of a number of recently created multimedia objects. To minimize delays that will occur to maintain synchronicity between the client and server, a larger multimedia object window can be dynamically created on the host server. In cases where the media object sequence has been previously transcoded and resides on the host as a non-live source, each object is numerically ordered. The transport mechanism for the multimedia objects is assumed to be HTTP for the purposes of illustration, however, other protocols which access content through file and directory structures could be used. For example FTP, IMAP4 and NNTP all have the capability to serve files in a directory structure.
On the client side, the number of multimedia objects that can be buffered in memory is based on the size of the first multimedia object. The amount of free memory available and the processing of further multimedia objects in the sequence can be optional and dependent on whether the implementation allows the modification of object parameters between multimedia objects (such as the size of the visual frame or the sample rate of the audio stream). The buffering and playback of multimedia objects in a synchronized fashion is critical to fluid playback.
The wireless networks over which the limited devices operate often have a very high latency. The HTTP 1.1 protocol and some implementations of HTTP 1.0 allow the use of a persistent connection over TCP to perform multiple requests. Furthermore, some HTTP 1.1 implementations allow the use of pipelined connections allowing the HTTP client to perform many requests in rapid succession decreasing the latency between the request and reception of each multimedia object. When possible, the invention can take advantage of this capability.
In its second aspect, the invention pertains to multimedia object deployment to large numbers of widely distributed wireless Internet clients. Media content converted to multimedia objects must be available to many users, and the distribution system must be sufficiently robust to allow peaks in demand and have sufficient geographic proximity that network congestion and latency are reduced.
In this second aspect, and in the case of live content transcoding from a live audio/video stream, the transcoding of input media formats and the creation of multimedia objects is done in real-time and immediately deployed to every content server of the distributed system. These content servers may be at the same location, or they may be geographically placed to support local mobile clients and take advantage of alternative mobile network proxy servers and proxy server object caching. The distribution of multimedia objects to alternative content servers can take place on the wired Internet. In a third aspect, the invention provides novel optimizations for digital video decoding. Some of these optimizations can then be used by an expert assessment process, whereby the decoder maintains a state information list of short-cuts, related to perceived frame rate, in a sorted list starting with those that will decrease output quality the least, to those that will decrease output quality the most but have the most impact on decoding speed. The client player-decoder dynamically adjusts how many shortcuts must be taken. These short-cuts are specifically designed to drastically reduce the number of computations necessary at certain critical steps in the video decoding process at the cost of video output quality. This allows the video decoder to scale in complexity based on the processing power of the device being used. It also allows users to experience multimedia playback despite the limitations of the device they may be using.
In a fourth aspect, the invention pertains to decoding audio on a limited mobile device. Mobile devices present many challenges for audio playback. Typical mobile processors have integer math only, little or no on-CPU cache, and a limited audio output interface. The present invention takes several approaches to getting maximum audio quality out of these very limited devices, which are applicable to other audio codecs such as AC3, AMR and WMA v9, as well as AAC LC. These include a novel use of Huffman codebooks, a highly optimized IMDCT process, and innovative windowing optimizations.
One of the serious limitations of mobile devices is their inability to play continuous sound. There is no way to play a long sound without gaps which occur when switching from one block of sound to the next block of sound. The proposed invention also solves this gapping problem by intelligent placement of the gaps. A frame of low total energy is selected, and the playback is controlled so that the gap will occur during that frame. The low-energy frame may be dropped so that synchronization is not lost. In a fifth aspect, the invention pertains to the implementation of a mobile handset MPEG4 video and AAC audio player that is hardware-independent and operating system independent, and can simply be downloaded prior to media playback on mobile clients that do not have embedded media players. Hardware and operating system independence are characteristics of Java applets, but Java cannot take advantage of hardware capabilities in processing the huge number of calculations and variables required for either MPEG4 decoding or AAC decoding on limited-processing mobile handsets. Hence, the required optimizations for Java itself, to permit the playback of AAC and MPEG4 on current mobile client hardware, are a source of technological innovation and advance.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present invention and to show more clearly how it may be carried into effect, reference will now be made, by way of example, to the accompanying drawings which show the preferred embodiment of the present invention and in which:
FIG. 1 is a general diagram for a distributed network system for multimedia-on-demand, utilizing a centralized content server, indexing host, multimedia object creator and transcoder for live broadcast applications or to transcode and create multimedia objects from archived multimedia files, and distributed content servers involving high capacity cellular network proxy servers and mobile clients running downloaded Java applets or embedded or downloaded non-java multimedia object players; and FIG. 2 is a flow diagram illustrating a multimedia object identification method by the multimedia object creator of FIG. 1 for mobile clients by the host content server of FIG. 1; and
FIG. 3 illustrates a multimedia object windowing sequence for a live transmission of multimedia objects created by the multimedia object creator of FIG. 1; and FIG. 3a illustrates multimedia object creation for single stream multimedia, just audio and multi-stream MPEG4 composite layers by the multimedia object creator of FIG. 1; and
FIG. 4 is a flow diagram illustrating the steps of multimedia object processing by the multimedia object players of FIG. 1; and
FIG. 5 is a diagram illustrating the architecture and processing interaction for the large scale distribution of live and archived multimedia content in the distributed network being managed by the indexing host of FIG. 1, involving remote transcoding/multimedia object creating servers and a central indexing host server; and
FIG. 6 is a general diagram illustrating standard MPEG4 Simple Profile decoding steps which are followed in general for video decoding by the multimedia object players of
FIG. 1; and
FIG. 7 is a flow diagram illustrating an optimized Huffman codebook method for digital video decoding method used by the multimedia object players of FIG. 1; and
FIG. 8 is a flow diagram illustrating a method of using a texture buffer to process P- frames for digital video decoding; and
FIG. 9 is a flow diagram showing a method of video decoding performing faster motion compensation without bilinear interpolation when less quality but faster processing is required that is used by the multimedia object players of FIG. 1; and
FIG. 10 is a flow diagram illustrating an optimized digital video decoding method for optimizations in pixel processing and dequantization used by the multimedia object players of FIG. 1; and
FIG. 11 is a flow diagram illustrating a novel use of Chen's algorithm used by the multimedia object players of FIG. 1; and
FIG. 12 is a flow diagram showing a novel handling of YUV to RGB conversion used by the multimedia object players of FIG. 1; and
FIG. 13 is a flow diagram illustrating decoding short cuts for effective video decoding on variable limited mobile client hardware used by the multimedia object players of FIG. 1; and
FIG. 14 is a general diagram illustrating basic steps of the AAC digital audio decoding and other similar audio codec decoding, which are followed in general by the multimedia object players of FIG. 1; and
FIG. 15 is a flow diagram illustrating an optimized Huffman codebook method for digital audio decoding used by the multimedia object players of FIG. 1; and
FIG. 16 is a flow diagram illustrating an optimized digital audio decoding method for optimizations in the IMDCT step used by the multimedia object players of FIG. 1; and
FIG. 17 illustrates simplified input short-cut processes specific to AAC Low Complexity (LC) audio decoding profile used by the multimedia object players of FIG. 1; and
FIG 18 shows audio decoding using an alternative bit-operation based Taylor computation method used by the multimedia object players of FIG. 1; and
FIG. 19 illustrates further IMDCT short window processing for digital audio decoding for the method used by the multimedia object players of FIG. 1; and
FIG. 20 illustrates low energy gap timing in audio playback for the method of audio decoding used by the multimedia object players of FIG. 1.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT Fig. 1 illustrates a centralized content server system 1, utilizing a transcoder 2 and a multimedia object creator 3 to create multimedia objects from a live broadcast 4 or to transcode and create multimedia objects from archived multimedia files 5. The central server includes an indexing host system 6 to deploy created multimedia objects to relevant content servers 7 through the wired Internet and to verify all geographically dispersed wireless clients 8. The system includes the potential use of proxy cellular network http servers 9, which can cache large numbers of small multimedia objects to support large numbers of concurrent wireless clients 8 running multimedia object Java applets 10 or embedded or downloaded non-java multimedia players 11.
FIG. 2 is a flow diagram illustrating the process of multimedia object identification by the multimedia object creator 3. This process encodes a Supplied Identification to each multimedia object to identify the transport protocol, source host, path and number of objects of a particular multimedia stream.
The host directory name of the multimedia objects is formatted to contain the number of video objects located within the directory. A delimiting character is placed between the end of the directory name and the number indicating the multimedia object count. This allows the use of directory names terminating in numbers while indicating an unambiguous multimedia object count, e.g. StarWars_1.mp4, StarWars_2.mp4, etc.
Multimedia objects within the directory are named similarly to the directory name. However, instead of the multimedia object count following the delimiting character, a number indicating the multimedia object's position within the sequence of multimedia objects is specified. The following is an example:
Supplied Identification for Multimedia Objects: <transport>://<host>/<path>/<MOSName><Delim><MOCount>
Computed Identification for each multimedia object based on Supplied Identification:
<SuppliedID>/<MOSName>.<MOSeqNum><Delim><MOType>
<transport> : protocol used to transmit multimedia objects
<host> : Host content servers 1 or 7 serving the multimedia objects directly to mobile clients 8
<path> : Path to the multimedia object directory
<MOSName> : Name of the multimedia object sequence (perhaps the name of a broadcast)
<MOCount> : Number of multimedia objects (Integer greater than zero)
<MOSeqNum> : The multimedia object's sequence number (Integer greater than zero, less than or equal to MOCount)
<Delim> : The delimiting character
<MOType> : encoding type, e.g. mp4
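The following Java sketch, provided for illustration only, assembles a multimedia object identification from the fields above; the method and parameter names are illustrative and not part of the original description:

    // Builds the supplied identification and, from it, the computed identification
    // of one multimedia object in the sequence.
    String multimediaObjectId(String transport, String host, String path,
                              String mosName, char delim, int moCount,
                              int moSeqNum, String moType) {
        String suppliedId = transport + "://" + host + "/" + path + "/"
                + mosName + delim + moCount;                        // <SuppliedID>
        return suppliedId + "/" + mosName + "." + moSeqNum + delim + moType;
    }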
When multimedia has been transcoded from a non-live source, the first multimedia object in the sequence to be played could have the index 0 of a counting series. In cases where the multimedia object sequence is being transcoded by the multimedia object creator 3 from a live stream, a window of multimedia objects is made available on all content servers 1 and 7. This window would be comprised of a number of recently created multimedia objects transcoded from the live stream.
The window of multimedia objects allows clients to begin reception of a multimedia object sequence at an earlier point than the most recently created multimedia object. This mechanism provides an extra degree of forgiveness in high-latency situations, where there may be a delay between the client 8 discovering the most recent multimedia object and the actual request.
The window of multimedia objects would shift as more multimedia objects are transmitted from the live source. The multimedia object sequences would begin at 0 and be numbered sequentially. The window size hence permits the removal of earlier objects.
For example, a live stream may be made comprised of a window of four objects. Upon transmission of a fifth video object, the first multimedia object would be deleted, resulting in the following sequence illustrated in FIG. 3.
The wireless client 8 can have the capability to search forward in the multimedia object sequence among the multimedia video objects in window. This provides additional transmission continuity in cases where it is not possible to maintain sufficient bandwidth for all multimedia objects in the live sequence.
To reduce delays that will occur to maintain synchronicity between the client 8 and server 1 or 7, a larger multimedia object window can be used. Likewise, the mobile client 8 may also store more than two multimedia objects in the internal buffer. Moreover, wireless networks over which the limited devices operate often have a very high latency. This is especially evident when TCP's 3-way handshake must be performed for every connection that is made. It is therefore ideal to use an application protocol that is able to minimize the latency between each request for a multimedia object.
The transport mechanism for multimedia objects is assumed to be HTTP for the purposes of system FIG. 1, however, other protocols which access content through file and directory structures could be used. For example FTP, IMAP4 and NNTP all have the capability to serve files in a directory structure. HTTP 1.1 protocol and some implementations of HTTP 1.0 allow the use of a persistent connection over TCP to perform multiple requests. Furthermore, some HTTP 1.1 implementations allow the use of pipelined connections, allowing the HTTP client to perform many requests in rapid succession decreasing the latency between the request and reception of each multimedia object.
In FIG. 3a, the transcoder 2 and multimedia object creator 3 create multimedia objects of an optimal digital encoding such as MPEG4/AAC from analog multimedia or an alternative codec stream 12, such as MPEG1, MPEG2, MOV, AVI, WMV, ASF, and higher encoded MPEG4. First, the input stream is transcoded into MPEG4 and AAC and then it is split according to a specified interval, such as 10 seconds, into multimedia objects. The video component of the stream is scanned after the specified interval for the next I-frame 13, where the split is made. Since typically there are no predicted frames in digitized audio 15, a conditional split is made to correspond to the video segmentation 14. In the case of multiple MPEG4 component streams 16, multiple video and/or audio composite layers can also be split into multimedia objects at I-frames 13. In the case of just audio signal input 12a, the input audio can be analog or encoded with digital codecs such as AMR, MP3, RealAudio or higher rate encoded AAC; the transcoded audio is then split into specified intervals 13a.
FIG. 4 is a flow diagram illustrating client side processing of multimedia objects. Multimedia object player 10 or 11 processing is initiated by the receipt of the first multimedia object from a content server 1 or 7. The first multimedia object's Identification is parsed and the total number of multimedia objects stored within the Identification's <path> is determined, or, in the case of live transmission applications, the number of multimedia objects in the window. Hence, heap memory allocations for the multimedia objects and meta-data can then be determined. These allocations are created of sufficient size that multimedia objects that follow can overwrite older multimedia objects in the same memory allocation without overflowing.
Depending on the amount of heap memory available on the device, several sets of memory allocations can be made to store multiple multimedia objects at once. This constitutes a multimedia object buffer and allows the decoder to be playing one multimedia object while the next has not yet fully completed reception. A device must have enough memory to allow two multimedia objects to be stored in the heap at once, otherwise the behavior is undefined (the decoder may refuse to play the multimedia object sequence, or play it with extended pauses between each multimedia object). Important state information is stored in several variables. This information includes the Integer values:
nMObjectPlaying - current Multimedia Object index playing
nMObjectRecving - current Multimedia Object index being received
and the Boolean value(s):
bWaitForBuffer - indicates to the Playback component that it should wait until buffering of further multimedia objects is complete.
This state information provides a mechanism with which the reception and playback of multimedia objects can be synchronized. The multimedia object contains information required to properly configure the audio and video decoders, and this information is passed to the respective decoder.
After configuring the audio and/or video decoding components of the multimedia object player, the object player may choose to either delay playback until the multimedia object buffers in memory have filled or may begin playback immediately while requesting the next multimedia object concurrently. This decision can be based on the speed at which the multimedia objects are retrieved versus the playback time of each multimedia object, the latency of requests for multimedia objects or the number of multimedia objects that can be stored in memory at once.
Following the reception and parsing of the first multimedia object, its audio and/or video content is then parsed and played back. In the audio and/or video content of the first and every subsequent multimedia object, the approach taken is to decode sufficient audio frames that their total duration is as long as the display time of the associated video frame and the processing time of the next audio frame. By interleaving the processing between several audio frames and a single video frame, the decoder can perform both audio and video decoding in a single thread.
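The following Java sketch illustrates this interleaving in a single thread; the decoder interfaces (hasFrames, frameDurationMs, decodeAndQueueNextFrame, decodeAndDisplayNextFrame) are hypothetical names used only for illustration:

    void decodeInterleaved(VideoDecoder video, AudioDecoder audio) {
        long queuedAudioMs = 0;
        while (video.hasFrames()) {
            long videoFrameMs = video.frameDurationMs();
            // Decode enough audio to cover the next video frame's display time
            // plus the time needed to decode one more audio frame.
            while (audio.hasFrames()
                    && queuedAudioMs < videoFrameMs + audio.frameDurationMs()) {
                audio.decodeAndQueueNextFrame();
                queuedAudioMs += audio.frameDurationMs();
            }
            video.decodeAndDisplayNextFrame();
            queuedAudioMs -= videoFrameMs;
        }
    }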
The retrieval and playback of multimedia objects continues until the last multimedia object in the sequence has been completely retrieved and its playback has finished.
The state information described also provides a mechanism which can be used to skip backwards and forwards through the multimedia object sequence. By changing the state information and restarting the retrieval of multimedia objects, the playback of the objects can be repositioned to any multimedia object in the sequence.
In FIG. 5, a large scale "live content" application is illustrated. A central server indexing host 17 manages all of the available content and the content servers 7 through which the content is made available. Remote transcoding and multimedia object creating servers 18 that provide continuously updated content must register this content with the indexing host 17. The transcoding servers 18 must also keep the central indexing server
17 updated with the latest multimedia object sequence indices, to allow distributed wireless clients 8 to begin playback of any live content with minimal delay.
The URLs of any content servers supporting a particular broadcast would be pre- registered in a table on the indexing server 17.
Content servers 7 accept and store live content being transmitted from transcoding servers 18. They can also store non-live archive multimedia content, but in a live content type application, they need only cache the most current window of multimedia objects.
Content servers 7 are distributed in such a fashion that allows wireless clients 8 a fast and low latency host connection. Content servers 7 could all be connected in a LAN, but for large scale operations, they could have any distribution on the wired Internet. The wireless client 8 can receive the content directly 19 from a content server 7 or indirectly
20 through a cellular network proxy server 9.
The central indexing host 17 accepts requests from clients 8 for multimedia content 21.
The indexing host 17 must reply with the most suitable content server 7 for the client 8.
This can either be done in a round-robin fashion or other factors can be included such as the location of the client 8 relative to available content servers 7. Other information such as the server load and network congestion of each content server 7 can be taken into account.
The central indexing host 17 also authenticates 22 clients 8 as they request available content and specific pieces of content. The authentication process is designed in such a way that the content servers 7 do not need to maintain a list of authorized clients 8 and the content available to them. Instead the indexing host 17 authenticates the client 8 and provides the client 8 with an encrypted string that is eventually decrypted by the content server 7. This string is the encrypted form of the concatenation of the content name or description, the current UTC date-time, and an interval of time for which the client 8 is authorized to access the multimedia content. The string is designed to allow the client 8 to access and play back multimedia objects received from a designated content server 7.
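For illustration only, the authorization string might be assembled as sketched below in Java; the field delimiter and the encrypt helper are assumptions, since the description does not specify a cipher or an exact field layout:

    // Illustrative only: '|' and encrypt() are assumptions shared, by convention,
    // with the content server 7 that later decrypts the string.
    String buildAuthToken(String contentName, long utcDateTimeMillis,
                          int authorizedMinutes, byte[] sharedKey) {
        String plain = contentName + "|" + utcDateTimeMillis + "|" + authorizedMinutes;
        return encrypt(plain, sharedKey);   // hypothetical helper, cipher unspecified
    }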
The indexing host 17 may also provide the client 8 with other information about the multimedia content, along with the encryption string, such as a description of the source, copyrights, and subtitle data-sources.
FIG. 6 illustrates MPEG4 video decoding process as outlined by the MPEG-4 Committee for recovering video object planes (VOPs) data from the coded bit stream. These steps of video decoding are followed in general by the video decoding process of multimedia players 10 and 11. The decoding process is composed of three major sections: shape, motion, and texture decoding.
Coded Bit Streams: The video stream is parsed and demultiplexed to obtain shape, motion, and texture bit streams. Each stream has a decoding process needed in order to reconstruct the VOPs.
Shape Decoding: Binary shape decoding is based on a block-based representation. The primary coding methods are block-based context-based binary arithmetic decoding and block-based motion compensation.
Variable Length Decoding: Shape information, motion vectors, and the quantized DCT coefficients are encoded using variable length codes. Differential DC coefficients in intra macroblocks are encoded as variable length codes. The final DC value is the sum of the differential DC value and the predicted value. The AC coefficients and non-intra block DC coefficients use a different variable length code.
Inverse Scan: Coefficients are scanned during the encoding process for two reasons - to allocate more bits to high energy DCT coefficients during quantization and to turn the two dimensional array (8x8) into a one dimensional array. The reverse process (i.e. inverse scan) is used on the decoding side to ensure proper dequantization and to restore the two dimensional information. There are three types of scans used: Alternate-Horizontal scan, Alternate-Vertical scan, and the Zigzag scan. The type of scan used during the decoding process will depend on the type of coefficients being decoded. Inverse AC and DC Prediction: The prediction process is only carried out for intra macro blocks. Previous intra macro blocks are used for forward prediction in order to produce subsequent macro blocks. This optimization process is used to predict both DC and AC coefficients.
Inverse Quantization: The two-dimensional array of coefficients produced by the inverse scan is inverse quantized to produce the reconstructed DCT coefficients. The process is trivial; it is basically a multiplication by the quantizer step size. A variable quantizer step size can be produced by using a weighted matrix or a scale factor in order to variably allocate bits during the encoding/decoding process.
Inverse DCT (IDCT): An inverse DCT is applied in order to recover the VOP from the frequency domain (i.e. DCT coefficients) into the spatial domain (i.e. pixel values). Note that in the texture decoding process, the luminance and chrominance components of the VOP (i.e. Y, Cb, Cr components) are quantized at different rates in order to reach a higher compression rate (which is the powerful aspect of the DCT transform when used in compression).
Motion Decoding and Compensation: Motion compensation is another technique used to achieve high compression. The algorithm used by MPEG-4 is block-based motion compensation to reduce the temporal redundancy between VOPs. Motion compensation in this case is twofold: it is used to predict the current VOP from the previous VOP, and to interpolate prediction from past and future VOPs in order to predict bidirectional VOPs. Motion vectors must be decoded to predict movement of shapes and macroblocks from one VOP to the next. Motion vectors are defined for 8x8 or 16x16 regions of a VOP.
As exemplified by the MPEG4 Simple Profile decoding diagram FIG. 6, but common to other video codecs, two types of Video Object Planes (VOP) are handled in digital video decoding. Video streams must begin with a frame that makes no temporal reference to any earlier frames, or an Intra-Frame (I-Frame). A second type of VOP, which allows temporal reference to the previous frame in the stream, is known as the Predicted Frame (P-Frame). Macroblocks within P-Frames may contain motion vectors to enable motion correction from the previous frame. These macroblocks often contain pixel residue information which includes corrections to the predicted pixels. Motion compensation must occur for many of the macroblocks within P-Frames and is a critical component of any video decoding mechanism. Motion vectors can be compressed using Huffman codes. These are binary Variable Length Codes (VLC) which represent values occurring with high probability with shorter binary length than values which occur with less probability. The rapid decoding of VLCs is critical to any decoding application on constrained devices. The video decoding process operating on the multimedia object players 10 and 11 decodes these VLCs in a novel use of Huffman codebooks.
Theoretical Huffman codebook process reads bits from the packet bitstream until the accumulated bits match a codeword in the codebook. This process can be thought of as logically walking the Huffman decode tree by reading one bit at a time from the bitstream, and using the bit as a decision Boolean to take the 0 branch (left side) or the 1 branch (right side). Walking this binary tree finishes when the decoding process hits a leaf in the decision tree - the result is the entry number corresponding to that leaf. Reading past the end of a packet propagates the 'end-of-stream' condition to a decoder. The novel approach taken to decode VLCs by the video decoding process operating on the multimedia object players 10 and 11, is illustrated in FIG. 7, and can be precisely described as follows:
Bits are read off the stream into an integer buffer (N). The number of bits read is equivalent to the length of the longest code in the VLC codebook. The roof (ceiling) of the base-2 logarithm of N is taken. Based on the result, N is shifted and used as an index into an array containing the true value indicated in the codebook and the true length of the code. The number of bits indicated as the true length is then removed from the video stream and processing continues. An example is provided:
Table B-7 of the MPEG4 Standard (Conf. [1]) contains the
Code/Value(s) pair:
Code: 00000101 Values: MBType: 2, CBPC: 0b11
The maximum length of a code in Table B-7 is 9. The above code would be read off the bit stream as (N:=) 00000101X (where X is a 'Do Not Care' bit). The roof of the base-2 logarithm of N is found to be 4, which is then used to identify the array in which N is used as an index to locate the appropriate decoded value. N can also be shifted to remove irrelevant bits, allowing the lookup array to be smaller.
This novel approach provides a very low time complexity and due to the nature of Huffman codes, a great majority of codes can be decoded with the first few tables providing a greater cache hit ratio.
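A minimal Java sketch of this table-driven VLC decode is given below; the BitStream helper, the per-group table layout and the optional index shift are assumptions made only for illustration:

    // Decode one VLC by peeking the longest-code number of bits and selecting a
    // small per-group lookup table from the ceiling of log2(N).
    static int decodeVideoVlc(BitStream bs, int maxCodeLen,
                              int[][] valueTable, int[][] lengthTable) {
        int n = bs.peekBits(maxCodeLen);      // bits are not consumed yet
        int group = 0;                        // roof of log2(n)
        for (int t = n - 1; t > 0; t >>= 1) {
            group++;
        }
        // n may additionally be shifted here to drop bits that are constant within
        // the group, keeping valueTable[group] small (exact shift depends on layout).
        int value = valueTable[group][n];     // true value from the codebook
        int trueLen = lengthTable[group][n];  // true length of the matched code
        bs.skipBits(trueLen);                 // remove only the code's own bits
        return value;
    }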
Following the reading and processing of motion vectors off the video stream, motion correction must take place. Due to the high latency of memory often used in constrained devices, random memory access and non-contiguous memory access must be minimized. FIG. 8 is a flow diagram, describing the video decoding process in the multimedia object players 10 and 11, which illustrates that a texture buffer large enough to contain 4 luminance and 2 chrominance blocks (the dimensions of a macroblock exemplified in the MPEG4 specification) is used to store the predicted pixels from a reference frame. This texture buffer is much smaller than the original video frame and decreases the amount of reading from and writing to non-consecutive bytes within the reference and output video frames. All pixel residues are applied to the texture buffer which is then copied to the output frame. This method of processing P-frames is optimal in situations where the main processing unit has sufficient cache to store the texture information of the entire Macroblock. In cases where the limited device has very little or no on-die cache, it may be preferable to avoid using a macroblock texture buffer. Also, macroblocks with motion vector information contain pixel residue values that are often distributed in a much smaller range of values than the pixels of a texture. In cases where the device is unable to decode the video stream in real-time, a faster but less accurate IDCT algorithm can be used to process these residue values. Furthermore, to minimize the effect of the less accurate IDCT algorithm, this step is taken first on chrominance pixel residues, but can also occur for luminance pixel residues as required. The motion vector information associated with a macroblock often references a point between pixels on the reference VOP. This requires that decoders perform bilinear interpolation between pixels. This is a time consuming process requiring the sampling of four source pixels, four additions and a single divide operation for every output pixel. In addition to various arithmetic optimizations performed, the video decoding process of the multimedia object players 10 and 11, shown in the flow diagram of FIG. 9, uses faster motion compensation without bilinear interpolation when less quality but faster processing is required.
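The texture-buffer approach of FIG. 8 can be sketched in Java as follows; the Frame type and the copyPredicted/storeMacroblock helpers are assumed names used only for illustration, not part of the original description:

    static final int MB_PIXELS = 4 * 64 + 2 * 64;       // 4 luma + 2 chroma 8x8 blocks
    private final byte[] texture = new byte[MB_PIXELS]; // small, cache-friendly buffer

    // Motion-compensate one P-frame macroblock through the texture buffer.
    void decodePMacroblock(Frame ref, Frame out, int mbX, int mbY,
                           int mvX, int mvY, int[] residue) {
        // 1. Gather the predicted macroblock from the reference frame in one pass.
        copyPredicted(ref, mbX, mbY, mvX, mvY, texture);
        // 2. Apply the decoded pixel residues to the contiguous buffer.
        for (int i = 0; i < MB_PIXELS; i++) {
            texture[i] = clip((texture[i] & 0xff) + residue[i]);
        }
        // 3. Write the corrected macroblock to the output frame in one pass.
        storeMacroblock(out, mbX, mbY, texture);
    }

    static byte clip(int v) {
        if (v < 0) v = 0; else if (v > 255) v = 255;
        return (byte) v;
    }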
Digital video codecs define Luminance and Chrominance values within a given subrange of values; MPEG4 uses [0, 255]. This allows the decoding software to store the Luminance and Chrominance pixels within a single byte of data with the correct precision. However, during the decoding process, values outside the [0, 255] range are often generated during motion compensation and in the inverse DCT steps. Attempting to store values outside this range results in single-byte overflows, causing graphical errors in the final video output. Clipping these values, and modifications to the dequantization process, can be very time consuming and can result in a decrease of output correctness.
FIG. 10 is a flow diagram illustrating a novel optimization for the dequantization step of digital video decoding in the multimedia object players 10 and 11. The novel optimization requires a reduction in pixel accuracy but allows values outside the range [0, 255] to be represented in a byte field without an overflow. Through analysis of various video samples, it has been found that the range [-128, 383] is sufficient to store nearly all potential resulting Luminance and Chrominance pixel values. By taking the original pixel value, adding 128 to it and dividing the result by two, values in the [-128, 383] range may be represented in the [0, 255] range with a decrease in accuracy of 50%. This decrease in luminance and chrominance accuracy is not a factor on many limited devices, as the RGB color resolution is often in the 4-bit to 18-bit range. As an example, an input pixel (nInputPixel) in the range [-128, 383] is converted into the alternate format for storing in a byte field (nbOutputPixel): byte nbOutputPixel = (nInputPixel + 128) / 2.
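A short C sketch of this mapping and its inverse is given below; the function names are illustrative only.

    #include <stdint.h>

    static uint8_t pack_pixel(int input_pixel)       /* input assumed in [-128, 383]  */
    {
        return (uint8_t)((input_pixel + 128) / 2);   /* stored in [0, 255]            */
    }

    static int unpack_pixel(uint8_t stored)          /* back to the approximate value */
    {
        return (int)stored * 2 - 128;                /* one bit of accuracy is lost   */
    }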
One of the most processing intensive steps of the decoding process occurs during the
IDCT step. The use of an effective integer-based algorithm is an absolute requirement when the decoding occurs on constrained devices. Chen's IDCT algorithm is optimized, but the processing time consumed by the standard Chen implementation is too great for real-time decoding on limited devices. Hence, FIG. 11 is a flow diagram illustrating a novel use of Chen's algorithm in the multimedia object players 10 and 11.
Here, several different simplified versions of Chen's algorithm are implemented, based on the energy or distribution of input DC and AC coefficients. This can result in reduced video output quality, but the effect is mitigated by giving a higher-quality preference to luminance blocks. Reduced color definition is often not as noticeable on constrained devices, and allows the chrominance blocks to be decoded with less precision. The IDCT process can be further optimized by recording which rows of the input matrix to the IDCT are populated with values. This same mechanism can be used to ignore certain input values of insufficient energy to make a very noticeable impact on the output image and further decrease processing time.
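The row-population bookkeeping might look like the C sketch below. Here row_flags is assumed to have been recorded while the coefficients were dequantized (bit r set if row r received any non-zero coefficient), and idct_1d() stands in for an assumed 1-D Chen transform; only the skip logic is illustrated, not Chen's butterflies.

    #include <stdint.h>

    void idct_1d(int16_t *data, int stride);   /* assumed 1-D transform over 8 values */

    void idct_2d_sparse(int16_t block[8][8], uint8_t row_flags)
    {
        /* horizontal pass: an all-zero input row yields an all-zero output row,
           so rows whose flag bit is clear can be skipped entirely                */
        for (int r = 0; r < 8; r++)
            if (row_flags & (1u << r))
                idct_1d(block[r], 1);

        /* vertical pass always runs, since any populated row spreads energy
           into every column                                                      */
        for (int c = 0; c < 8; c++)
            idct_1d(&block[0][c], 8);
    }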
In a limited device, the memory access required in the YUV to RGB conversion process can be sufficiently long to consume more time than any other step in the video decoding process. The video decoding process in the multimedia object players 10 and 11 uses a further step of scaling to reduce this processing, as the display size is often not the exact size of the video output. The YUV to RGB conversion and scaling steps can be combined into a single step to decrease memory access and increase the speed of video output. Several YUV to RGB functions are available, providing decoding times of varying speeds and quality as well as scaling ranges. FIG. 12 is a flow diagram showing the video decoding of the YUV to RGB step in the multimedia object players 10 and 11, as follows:
1) Separate YUV to RGB and scaling functions for cases where scaling up is required and where scaling down is required. Distinct optimizations are available for each method and added speed can be attained by separating the functionality between several different functions.
2) When scaling up is required, a minimum amount of reading from the source Luminance and Chrominance planes is desired. This is accomplished by iterating through pixels in the source plane. A fixed number of Luminance and Chrominance values in a column are read and the resulting RGB values computed for each pixel position. The pixel values are then copied in a looping fashion first by column, then by row to the output plane. This provides a way to read a single input value which may result in many output values in the output plane when scaling up.
3) Similarly, when scaling down is required, a minimum amount of reading from the source Luminance and Chrominance planes is desired. This is accomplished by iterating through pixel positions in the output plane and calculating the source pixel in the input plane. This provides a way to read a single input value for every output value and minimizes the number of input-plane reads that are necessary (a sketch of this scale-down path follows this list).
4) The YUV to RGB conversion step is such a time consuming one that methods of improving the speed of computation at the expense of output quality have been implemented. Improvements in speed can be obtained by sampling only a subset of the chrominance pixels, avoiding pixel clipping or calculating the Red and Blue values for only a subset of output pixels. All of these methods are used together to provide several quality levels in the YUV to RGB step.
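The following C sketch illustrates the combined scale-down and YUV-to-RGB pass of point 3) above, assuming 4:2:0 planes and an RGB565 output; the plane layout, the integer BT.601-style coefficients and all names are assumptions rather than the players' actual implementation.

    #include <stdint.h>

    static void yuv420_to_rgb565_scaled_down(
            const uint8_t *y_plane, const uint8_t *u_plane, const uint8_t *v_plane,
            int src_w, int src_h, uint16_t *out, int dst_w, int dst_h)
    {
        for (int dy = 0; dy < dst_h; dy++) {
            int sy = dy * src_h / dst_h;             /* source row for this output row     */
            for (int dx = 0; dx < dst_w; dx++) {
                int sx = dx * src_w / dst_w;         /* one source read per output pixel   */
                int Y = y_plane[sy * src_w + sx];
                int U = u_plane[(sy / 2) * (src_w / 2) + sx / 2] - 128;
                int V = v_plane[(sy / 2) * (src_w / 2) + sx / 2] - 128;

                /* integer conversion, coefficients scaled by 256 */
                int r = Y + ((359 * V) >> 8);
                int g = Y - ((88 * U + 183 * V) >> 8);
                int b = Y + ((454 * U) >> 8);
                if (r < 0) r = 0; if (r > 255) r = 255;
                if (g < 0) g = 0; if (g > 255) g = 255;
                if (b < 0) b = 0; if (b > 255) b = 255;

                /* pack to 16-bit RGB565 for a limited-device display */
                out[dy * dst_w + dx] =
                    (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
            }
        }
    }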
Hence, FIG. 13 is a flow diagram summarizing the short-cut optimization processing by the video decoding process used in the multimedia object players 10 and 11. State information is maintained about the quality levels with which the current video stream is processed. On very limited devices, short-cuts in the decoding process must be made to allow the device to maintain synchronicity between the audio and video playback. These short-cuts are specifically designed to drastically reduce the number of computations necessary at certain critical steps in the video decoding process, at the cost of video output quality. This mechanism allows video decoding to scale in complexity based on the processing power of the device being used. It has been found that three quality-level tests at each critical step appear to yield the best results. The highest quality is consistent with the video codec specification and displays a correct image. A medium quality level indicates that certain time consuming short-cuts are made with some impact on image quality. A low quality level indicates that drastic reductions in display quality are made to improve processing time - the output video can be unrecognizable at times and as a result this level is used only in drastic cases of a sudden drop in processor availability.
A final option is to avoid the processing and displaying of some or all P-Frames. This is only an option in video streams where I-Frames occur at regular intervals. Given the wide variety of processing capabilities in limited devices, this implementation strongly suggests the creation of multimedia objects from video streams with transcoder 2 specifying very regular I-Frames, so that devices of very limited processing power are able to provide the client 8 with occasional frame changes.
The state information is composed of a series of integers that correspond to various steps in the decoding process and define the quality at which the decoder should perform several steps. The implemented system in the multimedia players 10 and 11 consists of six of these integers:
- nYUVtoRGBQuality - Quality of the YUV to RGB conversion process
- nLumaIDCTQuality - Quality of the Inverse DCT function for Luminance blocks
- nChromaIDCTQuality - Quality of the Inverse DCT function for Chrominance blocks
- nLumaMCQuality - Quality of motion compensation for Luminance blocks
- nChromaMCQuality - Quality of motion compensation for Chrominance blocks
- nFrameRateQuality - Defines the allowance to drop frames (from a single P-Frame occurring before an I-Frame up to dropping all P-Frames)
In addition to the set of integers defining the actual quality at various steps, a single integer representing the current quality level of the overall decoding is used (named nVideoQuality in this instance). Each step quality has a very limited number of possibilities (HIGH, MEDIUM, LOW, etc.); however, nVideoQuality can take on many values. At each value of nVideoQuality, a ruleset defines the quality of each of the above step qualities. At the highest value of nVideoQuality, all step qualities are set to maximum. As nVideoQuality is decreased, the step qualities are incrementally reduced according to the ruleset.
Some states of quality levels are less preferable than others. For example, it is not preferable to render many frames at the lowest quality setting of nLumaIDCTQuality - it is instead more preferable to drop frames if there is insufficient processing capability to perform nLumaIDCTQuality at a higher quality. The ruleset is designed to take these possibilities into account.
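A C sketch of such a ruleset table is given below. The particular level assignments are illustrative rather than taken from the specification, except for the constraint just described: frames are allowed to be dropped before the luminance IDCT is allowed to fall to its lowest quality.

    typedef enum { LOW, MEDIUM, HIGH } step_quality_t;

    typedef struct {
        step_quality_t yuv_to_rgb, luma_idct, chroma_idct,
                       luma_mc, chroma_mc, frame_rate;
    } quality_rules_t;

    static const quality_rules_t g_rules[] = {
        /* nVideoQuality = 0 (lowest) ... 4 (highest); values are illustrative   */
        { LOW,    LOW,    LOW,    LOW,    LOW,    LOW    },
        { LOW,    MEDIUM, LOW,    LOW,    LOW,    LOW    },  /* drop frames rather
                                                                than LOW luma IDCT */
        { MEDIUM, MEDIUM, LOW,    MEDIUM, LOW,    MEDIUM },
        { MEDIUM, HIGH,   MEDIUM, HIGH,   MEDIUM, MEDIUM },
        { HIGH,   HIGH,   HIGH,   HIGH,   HIGH,   HIGH   },  /* codec-spec quality */
    };

    static quality_rules_t rules_for(int nVideoQuality)
    {
        int max = (int)(sizeof(g_rules) / sizeof(g_rules[0])) - 1;
        if (nVideoQuality < 0)   nVideoQuality = 0;
        if (nVideoQuality > max) nVideoQuality = max;
        return g_rules[nVideoQuality];
    }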
FIG. 14 illustrates the general steps of audio decoding followed by the audio decoding process of the multimedia object players 10 and 11. The first step in AAC audio decoding (bit-stream de-multiplexing), which is common to other digital codecs, is to establish frame alignment. This involves finding the AAC sync word and confirming that the AAC frame does not contain any errors, if error checking is enabled in the frame. Once the frame sync is found, the bitstream is de-multiplexed or unpacked. This includes unpacking of the Huffman decoded and quantized scale factors, the M/S synthesis side information, the intensity stereo side information, the TNS coefficients, the filter bank side information and the gain control words.
Next, the quantized spectral coefficients are Huffman decoded. Each coefficient must be inverse quantized by a 4/3 power nonlinearity and then scaled by the quantizer step size. The Huffman codebooks used to decode digital audio in the multimedia object players 10 and 11 of FIG. 1 are very different from those used for digital video, but they are very similar to Huffman codebooks used in other digital audio codecs. A novel simplification of variable length codes (VLCs), used in audio decoding by the multimedia object players 10 and 11, is illustrated in FIG. 15; it allows the decoding of a single VLC value with a single array lookup. The novel approach taken is as follows:
Bits are read off the stream into an integer N. The number of bits read is equivalent to the maximum number of bits in the longest codeword in the codebook. The first binary 0 is then located starting from the highest bit. The left-based index of this first 0 is then used to remove all the preceding 1s, and N is shifted and used as an array index.
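A hedged C sketch of this single-lookup decode follows; it reuses the same assumed bitstream reader as the earlier video sketch, and the per-zero-position tables (audio_vlc_tables, audio_vlc_shift) are assumptions built offline from the AAC codebooks, not part of the specification.

    typedef struct bitstream bitstream_t;              /* assumed bitstream reader          */
    unsigned peek_bits(bitstream_t *bs, int n);        /* assumed: read without consuming   */
    void     skip_bits(bitstream_t *bs, int n);        /* assumed: consume n bits           */

    typedef struct { int value; int code_length; } audio_vlc_entry_t;

    extern const audio_vlc_entry_t *audio_vlc_tables[];  /* one table per first-zero position */
    extern const int                audio_vlc_shift[];   /* don't-care bits to drop           */

    static int zero_position(unsigned n, int width)    /* left-based index of the first 0   */
    {
        int p = 0;
        while (p < width && (n & (1u << (width - 1 - p))))
            p++;
        return p;
    }

    int decode_audio_vlc(bitstream_t *bs, int max_code_len)
    {
        unsigned n  = peek_bits(bs, max_code_len);
        int      zp = zero_position(n, max_code_len);
        unsigned idx = (n & ((1u << (max_code_len - zp)) - 1))  /* mask off the leading 1s  */
                       >> audio_vlc_shift[zp];                  /* drop irrelevant low bits */
        const audio_vlc_entry_t *e = &audio_vlc_tables[zp][idx];
        skip_bits(bs, e->code_length);
        return e->value;
    }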
For example, the AAC standard's 2nd Codebook contains the Code/Value pair:
Code: 11110110, Value: 77
The maximum length of a code in the 2nd table is 9, so when read from the bitstream the above code would appear as:
11110110X (where X is a "Do Not Care" bit)
The ZeroPosition of the above integer is found to be 4. The ZeroPosition is then used to mask off the 1 bits previous to it, yielding the integer "0110X". This can then be used as an index to an array, or be shifted to remove the irrelevant bits, allowing the lookup array to be smaller.
The next standard audio decoding step, M/S synthesis, conditionally dematrixes two channels into a stereo pair. The samples may already represent the left and right signals, in which case no computation is necessary. Otherwise the pair must be de-matrixed via one add and one subtract per sample pair in order to retrieve the proper channel coefficients.
Intensity stereo identifies regions in a channel pair that are similar, except for their position. Left-channel intensity regions must have inverse quantization and scaling applied. Right-channel intensity stereo regions use the left-channel inverse quantized and scaled coefficients, which must be re-scaled by the intensity position factors. Hence the net complexity of intensity stereo is a savings of one inverse quantization per intensity stereo coded coefficient. The next standard step, temporal noise shaping (TNS), has a variable load, depending on the number of spectral coefficients that are filtered.
Finally, the Inverse Modified Discrete Cosine Transform (IMDCT) transforms the spectral coefficients into time-domain samples. For fixed-point implementations it is required that any round-off noise is less than 1/2 LSB after the transform result is rounded to linear 16-bit values. Fixed-point realizations using 24 bit words are sufficient.
FIG. 16 illustrates Intermediate 23 and Final 24 optimizations for the digital audio
IMDCT step used by the audio decoding process in the multimedia object players 10 and 11. The audio decoder of the multimedia object players 10 and 11 combines the use of a specific Inverse Fast-Fourier Transform with pre- and post-processing steps. This method produces a simplified IMDCT algorithm with O(n * log(n)) runtime. This method can also incorporate the use of various IFFT algorithms based on the sparseness of input.
The following steps describe the implementation:
[0] The IMDCT algorithm accepts an input array X of spectral coefficients in the frequency domain and outputs an array of amplitude values in the time domain twice the size of the input. The implementation of the AAC Low Complexity codec requires that the IMDCT algorithm accept input array lengths of 128 or 1024 Real values and results in an output of 256 or 2048 Real values. In the following steps, N refers to the size of the output (256 or 2048), Im(X) returns the imaginary component of some variable X and Re(X) returns the real component.
[1] The (N/2) input spectral coefficients are converted into complex numbers and stored into an array C of size (N/4). There are many approaches to this step, however, the approach taken in the described implementation pairs coefficients with one coefficient becoming the real component and one becoming the imaginary component of a complex number.
The following pseudo code describes this step:
for (n = 0; n < N/4; n++) {
    Re(C[n]) = X[N/2 - 2*n];
    Im(C[n]) = X[n];
}
[2] This result is then multiplied with scaled complex numbers on the unit circle, yielding an array of size N/4. This step is described with the following pseudo code:
for (n = 0; n < N/4; n++) {
    Re(Z) = SQRT(2/N) * cos(2*Pi*(n + 1/8) / N);
    Im(Z) = SQRT(2/N) * sin(2*Pi*(n + 1/8) / N);
    /* complex multiply; tmp keeps the original Re(C[n]) for the second line */
    tmp      = Re(C[n]);
    Re(C[n]) = tmp * Re(Z) - Im(C[n]) * Im(Z);
    Im(C[n]) = tmp * Im(Z) + Im(C[n]) * Re(Z);
}
[3] The resulting array of complex numbers is then passed into an Inverse Fast Fourier Transform (IFFT) algorithm. A fixed-point IFFT algorithm is used to allow processing of the IMDCT on devices which lack floating point capabilities. Most mobile devices do not allow floating point computations, and on those that do, floating point is usually too slow. Due to the properties of the inverse Fourier transformation, the transformation can be calculated in a fixed point manner. In a fixed point transformation, the input is scaled by multiplying the input values by a scale factor, and the correct output is then found by multiplying by the reciprocal of the scale factor. Therefore a scaling operation is applied before and after the IFFT. A scale factor which is a power of two is chosen so that the scaling and re-scaling operations can be accomplished by bit shift operations. Bit shifts are among the fastest operations for CPUs.
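A minimal C sketch of the power-of-two scaling around the IFFT is given below; SCALE_BITS and the ifft() interface are assumptions, and the inputs are assumed small enough that the up-shift does not overflow.

    #include <stdint.h>

    #define SCALE_BITS 14                     /* scale factor 2^14, a power of two   */

    void ifft(int32_t *re, int32_t *im, int n);  /* assumed integer IFFT             */

    void fixed_point_ifft(int32_t *re, int32_t *im, int n)
    {
        for (int i = 0; i < n; i++) {         /* scale up: multiply by 2^14 via shift */
            re[i] <<= SCALE_BITS;
            im[i] <<= SCALE_BITS;
        }
        ifft(re, im, n);                      /* transform in fixed point             */
        for (int i = 0; i < n; i++) {         /* scale back: multiply by the reciprocal
                                                 of the scale factor via shift        */
            re[i] >>= SCALE_BITS;
            im[i] >>= SCALE_BITS;
        }
    }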
[4] Following the Inverse FFT step, elements from the complex array C must again be multiplied by complex numbers as in step [2].
[5] The values from the resulting complex array C are then stored into an array of Real numbers x of size N. The following pseudo code demonstrates the process:
for (l = 0; l < N/8; l += 2) {
    x[2*l]             = Im(C[N/8 + l]);
    x[2*l + 1]         = Re(-C[N/8 - 1 - l]);
    x[2*l + 2]         = Im(C[N/8 + 1 + l]);
    x[2*l + 3]         = Re(-C[N/8 - 2 - l]);
    x[2*l + N/4]       = Re(C[l]);
    x[2*l + N/4 + 1]   = Im(-C[N/4 - 1 - l]);
    x[2*l + N/4 + 2]   = Re(C[l + 1]);
    x[2*l + N/4 + 3]   = Im(-C[N/4 - 2 - l]);
    x[2*l + N/2]       = Re(C[N/8 + l]);
    x[2*l + N/2 + 1]   = Im(-C[N/8 - 1 - l]);
    x[2*l + N/2 + 2]   = Re(C[N/8 + 1 + l]);
    x[2*l + N/2 + 3]   = Im(-C[N/8 - 2 - l]);
    x[2*l + N/2 + N/4]     = Im(-C[l]);
    x[2*l + N/2 + N/4 + 1] = Re(C[N/4 - 1 - l]);
    x[2*l + N/2 + N/4 + 2] = Im(-C[l + 1]);
    x[2*l + N/2 + N/4 + 3] = Re(C[N/4 - 2 - l]);
}
As can be seen in FIG. 16, several steps in the IMDCT process can be combined. The goal of combining steps is to reduce the number of memory accesses needed to decode a frame of audio. The flow on the right shows the steps as they occur in the decoder. In summary, the novel optimization of the IMDCT step in audio decoding shown by FIG. 16 pertains to combining steps on the Final 24 optimization side:
1. Re-order, pre-scale and twiddle: The method loops over the input data, and each datum is complex-multiplied by the twiddle factor, and is then re-scaled by doing a bit shift operation. However, the twiddle factor is already bit-shifted so it can be treated as a fixed-point number, so the scaling operation's bit shift is partially performed by the twiddle factor itself. The relevant twiddle factors are stored in an array table. Once the complex multiplication and scaling are done, the resulting values are stored in the reordered location in the IFFT input array.
2. Perform the fixed-point integer inverse Fourier transform. This transformation is the same as the transformation in the pre-combined flow.
3. Re-scale, re-order, post-twiddle, window and overlap: Combining these four operations into one step replaces four array accesses with one, and some of the multiplications are also combined into single bit shifts. This method loops over the IFFT output array and performs four operations in each iteration of the loop: the post-twiddle and rescale are combined, because the post-twiddle uses a twiddle factor table which is already bit-shifted. Windowing is combined in this step also, with window values coming from either a table or a fast integer sine calculator. Finally, values are overlapped and stored in the correct location in the output array.
FIG. 17 illustrates simplified input shortcut processes, specific to the AAC Low Complexity (LC) profile, which are used in the audio decoding process of the multimedia players 10 and 11. Note that the Mid/Side, Intensity and Temporal Noise Shaping steps, marked with cross hatches in FIG. 17, are optional. In cases where these three features are not present, audio decoding can further combine other steps in a novel way. These steps are marked in grey in FIG. 17. If these other steps are combined, there are no dependencies within a frame until the IFFT step within IMDCT itself is reached. Therefore, operations between noiseless decoding and the pre-IFFT operations within IMDCT itself are combined, minimizing memory access. IMDCT has four different window shapes which are common in other digital audio codecs: long only, long start, long stop, and eight short. Of these four window sequences, only one (long only) has non-zero data in the entire output synthesis window. In the case of AAC, however, the output synthesis window always has 2048 output values.
For some window shapes, the calculations can be short-cut, avoiding the post-twiddle, windowing, re-ordering, scaling and overlapping steps entirely.
IMDCT permits two different window types: Kaiser-Bessel Derived (KBD) windows and Sine windows. KBD uses a complicated formula which cannot be computed in realtime, and is always used as a table.
Sine windows are also used from tables in most implementations. However, on a mobile device, which generally has a very small on-CPU memory cache, frequent accesses to a sine window value table will cause cache misses and degraded performance. As an alternative to using a sine lookup table to compute windowing, FIG. 18 shows the audio decoder of the multimedia object players 10 and 11 using a bit-operation based Taylor computation, as follows:
1. Use trigonometric identities to express the sine calculation in terms of a sine in the range of 0 to π/2. Call the resulting angle X.
2. Calculate X*X. Call this value S.
3. Calculate the result as X * (256 - S * (43 - (S << 1))) (see the sketch following this list).
4. The result produces a window value in the range of 0 to 255, allowing fast windowing without the use of lookup tables.
5. The bit shift operations in Step 3 can be further combined with other fixed-point multiplication steps.
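The following C sketch evaluates the window formula of step 3 for an angle already reduced to [0, π/2]. The floating-point handling of the angle is an illustration of the underlying Taylor approximation of 256*sin(x); the decoder itself performs these steps in fixed point, which is not shown here.

    static int fast_sine_window(double x)      /* x assumed already reduced to [0, pi/2] */
    {
        double s = x * x;                      /* S = X*X                                */
        /* truncated Taylor series for 256*sin(x):
           256*x*(1 - x^2/6 + x^4/120)  ~=  x*(256 - s*(43 - 2*s))                       */
        double w = x * (256.0 - s * (43.0 - 2.0 * s));
        return (int)w;                         /* window value in the range [0, 255]     */
    }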
FIG. 19 illustrates further IMDCT short window processing for even greater efficiency by the audio decoding process of the multimedia players 10 and 11. In a sequence of eight short windows, the input of 1024 values is divided into eight short windows of 128 values, and IMDCT, windowing and overlapping are performed on each of these short windows. Each window of 128 values results in a synthesis output window of 256 values. These are then overlapped, resulting in non-zero values in the range of 448 to 1600.
The approach taken is to do every one of the IMDCT operations in sequence, rather than in parallel, storing the IMDCT results directly into the regions of the output array which will be zeroed. The output values are then windowed and overlapped. After all the eight short windows are completed, the regions of the synthesis output window which are always zero can be disregarded, due to the window shape shortcut method described above.
Finally, FIG. 20 illustrates an interleaved detection process in the audio decoding of received multimedia objects 25 by the multimedia object players 10 and 11. FIG. 20 illustrates the placement of gaps 26 at detected frames of low total energy 27 as they are detected during audio decoding by the multimedia object players 10 and 11. Playback is then controlled so that the gap will occur during that frame, which may be dropped, so that synchronization with video is not lost.
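A C sketch of how such gap placement might be decided per decoded frame is given below; the energy threshold, the behind-schedule test and all names are illustrative assumptions, not taken from the specification.

    #include <stdint.h>

    static int32_t frame_energy(const int16_t *pcm, int n)
    {
        int64_t e = 0;
        for (int i = 0; i < n; i++)
            e += (int32_t)pcm[i] * pcm[i];
        return (int32_t)(e / n);               /* mean squared amplitude of the frame   */
    }

    /* returns 1 if this decoded audio frame should be skipped, so that the gap
       falls where it is least audible and synchronization with video is kept           */
    static int place_gap_here(const int16_t *pcm, int n,
                              long audio_clock_ms, long video_clock_ms)
    {
        const int32_t LOW_ENERGY = 1000;       /* assumed low-energy threshold          */
        int behind = (video_clock_ms - audio_clock_ms) > 40;   /* roughly one frame     */
        return behind && frame_energy(pcm, n) < LOW_ENERGY;
    }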
In FIG. 1 the multimedia object player 10 is a downloadable Java (J2ME) applet, and the described audio and video decoder optimizations and strategies, FIG. 7-13 and FIG. 15-20, as applied to standard MPEG4 and AAC decoding, make it possible for the multimedia object player 10 to play back live music and video, at acceptable frame rates (5-15 fps), on limited cell phone handsets. Java cannot take advantage of hardware capabilities in processing the huge number of calculations and variables required for either MPEG4 decoding or AAC decoding. Hence, the required optimizations for multimedia player 10, to permit the playback of AAC and MPEG4 on current mobile client hardware, are a source of technological innovation and advance. The foregoing is intended, along with the drawings, to illustrate the preferred embodiment of the invention. Those skilled in the art will be able to devise numerous arrangements which, although not explicitly shown or described herein, embody the principles of the invention and are within their spirit and scope as defined by the following claims.

Claims

What is claimed is:
1. A method of transmitting multimedia to wireless clients, wherein the multimedia transmission method depends on: the creation of multimedia objects from existing multimedia files or dynamically from live multimedia streams; a direct request, and transmission, of just the multimedia objects created from existing multimedia files or dynamically created multimedia objects from live multimedia streams by wireless client-based, multimedia object players; and, a continuous playback of the received multimedia objects by wireless client-based multimedia players that are specifically designed to play continuous sequences of the multimedia objects.
2. The method 1, running on a distributed network system for multimedia-on-demand, utilizing a centralized content server, an indexing host, multimedia object creator and transcoder for live broadcast applications or to transcode and create multimedia objects from archived multimedia files, and distributed content servers involving high capacity cellular network proxy servers and mobile clients running downloaded Java applets or embedded or downloaded non-java multimedia object players.
3. The method 1, wherein the transmission of said multimedia objects is by protocols such as:
HTTP, FTP, IMAP4 and NNTP, which have the capability to serve files in a directory structure; and where HTTP 1.1 is used and allows pipelined connections over persistent TCP connections, multimedia object players can request many multimedia objects in rapid succession.
4. The method of claim 1, wherein the wireless client-based multimedia players are downloadable Java applet multimedia object players or non-java multimedia object players, or embedded multimedia object players.
5. A method of creating multimedia objects, where, in the case of a live multimedia stream, the input multimedia stream is first transcoded into an optimal audiovisual format such as MPEG4/AAC and at an optimal encoding rate reflecting available cellular network bandwidth, then dynamically converted into multimedia objects by splitting the encoded stream into specified intervals, and then immediately transmitted to wireless clients to distributed content servers transmitting the recently created multimedia objects to wireless clients; alternatively, in the case of converting an archived multimedia file, the input multimedia stream is first transcoded into an optimal audiovisual format such as MPEG4/AAC and at an optimal encoding rate reflecting available cellular network bandwidth, and then converted into multimedia objects by splitting the encoded stream into specified intervals.
6. The method of 5, wherein the dynamically created multimedia objects are maintained by content servers serving the multimedia objects to wireless clients, as a window of multimedia objects, during transmission to wireless clients.
7. The method 5, wherein the input multimedia stream is scanned after specified intervals for the next I-frame, and the multimedia segment is split at that next I-frame to create another multimedia object.
8. The method 5, wherein the input multimedia stream can be in analog audiovisual format or a variety of digital audiovisual formats, including MPEG4, MPEG1, MPEG2, MOV, AVI, WMV, ASF, and higher encoded MPEG4, or just audio formats, including analog audio, mp3, AMR, Windows Media Audio, RealAudio and higher encoded AAC.
9. The method 6, wherein a window of multimedia objects for live transmission is created and comprised of a small series of multimedia objects, which can be incremented and decremented as newly created objects are introduced to the window or transmitted to wireless clients.
10. The method 5, wherein multimedia objects are identified when they are created by the multimedia object creator with an Internet address that includes such information as: the transport protocol; the varying host URL, if there are many content servers involved as in a live broadcast application, of the transmission server or content server directly serving the wireless client; the name of the multimedia object sequence or broadcast; the number of multimedia objects in the sequence; and, the multimedia object's sequence number.
11. The method 5, whereby multimedia objects are split from multiple MPEG4 composite layer streams by scanning time intervals and splitting them at next I-frames.
12. The method 5, whereby audio media objects are split from a single audio stream by splitting at set time intervals.
13. A method of wireless client side processing of multimedia objects by a multimedia object player, wherein: the Identification of the multimedia object is parsed and the total number of multimedia objects within the Identification path is determined, or the number of multimedia objects in the window is determined for live applications; heap memory allocations for said multimedia objects and meta-data are determined to create a buffer on the wireless client for more than one multimedia object; to identify multimedia object playing, multimedia object receiving and multimedia wait-for states for the multimedia object sequence; to hence use these states as a mechanism to synchronize the reception and playback of multimedia objects; to pass this information onto the audio and/or video decoding components of the multimedia player to properly configure them to uniquely process the sequence of multimedia objects.
14. The method 13, whereby, following configuration of audio and/or video decoding components for a specific sequence of multimedia objects, the multimedia object player can delay playback until the multimedia object buffers in the wireless client memory have filled or can begin playback immediately while requesting the next multimedia object; and, whereby, the multimedia object player decision can be based on the speed at which the multimedia objects are retrieved versus the playback time of each multimedia object, the latency of requests for multimedia objects, or the number of multimedia objects that can be stored in wireless client memory at once.
15. The method 13, following the parsing of the first multimedia object, its audio and video contents of the first and each subsequent multimedia object in the sequence are decoded and played back, whereby sufficient audio frames are decoded that their total display time is as long as the associated video frame and processing time of the next audio frame; and, whereby interleaving the processing between several audio frames and a single video frame, the multimedia object player can perform audio and video decoding in a single thread.
16. The method 13, whereby state information also provides a mechanism that can be used to skip backwards and forwards through a multimedia object sequence, wherein changing the state information and restarting retrieval of multimedia objects repositions playback from any multimedia object in the sequence; and, wherein the transmission is a live transmission, state information can reposition playback from any multimedia object within a current window.
17. A method for processing the large scale distribution of multimedia content in the distributed network being managed by an indexing host server, wherein: the indexing host registers all URLs of content servers supporting particular live multimedia object transmissions and archived sequences of multimedia objects; remote transcoding/multimedia object creating servers provide registered updates of multimedia object sequence indices to the indexing host; remote transcoding/multimedia object creating servers also register the sequence indices of the most recent windows of live content multimedia objects with the indexing host; wherein content servers accept and store the most current window of live content multimedia objects or the most recent non-live archives of multimedia object sequences; content servers transmit their multimedia directly to wireless clients, or indirectly through cellular network proxy servers; and whereby, the indexing host verifies the wireless client; the indexing host accepts requests from wireless clients for multimedia content; the indexing host determines the most suitable content server for the wireless client; and, the indexing host provides the wireless client with a decryption string for the requested multimedia content.
18. A method of optimized video decoding in decoding Variable Length Codes (VLCs) in Huffman codebooks which are used to compress Motion Vectors for motion compensation occurring in many macroblocks within P-frames, whereby, bits are read off the main video stream into an integer buffer (N); the number of bits read is equivalent to the longest code in the VLC codebook; the roof of logarithm (base 2) of N is taken; based on the result, N is shifted and used as an index into an array containing the true value indicated in the codebook and the true length of the code; the number of bits indicated as the true length is then removed from the video stream and processing continues.
19. A method of optimized video decoding using a texture buffer large enough to contain 4 luminance and 2 chrominance blocks (the dimensions of a macroblock exemplified in the MPEG4 specification) to store predicted pixels from a reference frame; said texture buffer decreases the amount of reading from and writing to non-consecutive bytes within the reference and output video frames; all pixel residues are applied to the texture buffer, which is then copied to the output frame; to use a faster but less accurate IDCT algorithm with the process, if the wireless handset cannot decode the video stream in real-time, to process these residue values; and furthermore, to minimize the effect of the less accurate IDCT algorithm, by using this process first on the chrominance pixel residues.
20. A method of video decoding to perform faster motion compensation without bilinear interpolation when less quality but faster processing is required.
21. A method of optimized digital video decoding for optimizations in pixel processing and dequantization, whereby original luminance and chrominance values are taken, 128 is added and the result is divided by 2; values in the [-128, 383] range are then represented in the [0, 255] range, decreasing luminance and chrominance accuracy without significantly affecting RGB color resolution in the 4-bit to 18-bit range.
22. A method of optimizing video decoding by optimizing Chen's algorithm, whereby, different simplified versions of Chen's algorithm are used, based on the energy input or distribution of input DC and AC coefficients, whereby, the energy or distribution of DC and AC coefficients is first assessed; a simplified Chen's algorithm is selected for IDCT processing; a higher quality preference is given to luminance blocks; and, the process is further optimized by recording rows of the input matrix to the IDCT that are populated with values.
23. A method of optimizing video decoding in the handling YUV to RGB conversion, whereby, YUV and RGB scaling functions are separated; when scaling up, pixels are read on the source plane and copied to the output plane; when scaling down, iteration is performed through pixel positions in the output plane and source pixels are calculated in the input plane; and, sampling is performed on only a subset of chrominance pixels, avoiding pixel clipping or calculating the Red and Blue values for only a subset of output pixels.
24. A method of optimizing video decoding processing by using short-cuts to permit video decoding to scale in complexity, based on the processing power of the wireless client, whereby, three quality levels are used with high being consistent with a correct image in the digital codec specification; medium corresponds to some reduction in image quality to reduce processing time; and low being a drastic reduction in image quality to improve processing time; wherein a final option is to avoid the processing and display of P-frames when I-frames occur at regular intervals.
25. A method of optimizing video decoding process by using short-cuts to permit video decoding to scale in complexity, based on the processing power of the wireless client, where state information defines the quality at which decoding should be performed at several steps of the decoding process.
26. The method 25, consisting of six integer value steps defining state: Quality of the YUV to RGB conversion process;
Quality of the Inverse DCT for luminance blocks;
Quality of the Inverse DCT function for chrominance blocks;
Quality of Motion Compensation for luminance blocks;
Quality of Motion Compensation for chrominance blocks; and, allowance to drop frames (from a single P-frame occurring before an I-Frame up to dropping all P-Frames).
27. The method 25, further including a single integer representing the quality level of the overall decoding, wherein, at each value of overall quality, a ruleset defines quality for each of the step qualities; and, at the highest overall quality, all step qualities are set to maximum; and, as overall quality is decreased, step qualities are incrementally reduced according to the ruleset.
28. A method of optimized audio decoding pertaining to a simplification of variable length codes (VLCs) in Huffman codebooks, wherein, bits are read off the audio stream into an integer N; the number of bits read is equivalent to the maximum number of bits in the longest codeword in the codebook; the first binary 0 is then located starting from the highest bit; the left-based index of this first 0 is then used to remove all the previous 1s; and, N is shifted and used as an array index.
29. A method of optimized digital audio decoding for optimizations in the IMDCT step, whereby, the Inverse Fast Fourier Transform is combined with pre- and post-processing steps to produce a simplified IMDCT algorithm with O(n*log(n)) runtime, which can incorporate various IFFT algorithms based on the sparseness of input, and, which specifically involves the following combination of steps in a final optimization: a) Re-order, pre-scale and twiddle, whereby, the method loops over the input data, and each datum is complex-multiplied by the twiddle factor, and is then re-scaled by doing a bit shift operation; and, the twiddle factor is already bit-shifted so it can be treated as a fixed-point number, so the scaling operation's bit shift is partially performed by the twiddle factor itself; and the relevant twiddle factors are stored in an array table; and finally, once the complex multiplication and scaling are done, the resulting values are stored in the re-ordered location in the IFFT input array; b) Perform the fixed-point integer inverse Fourier transform; c) Re-scale, re-order, post-twiddle, window and overlap, whereby combining these four operations into one step replaces four array accesses with one, and some multiplications are also combined into single bit shifts; and hence, the method loops over the IFFT output array, and performs four operations in each iteration of the loop: the post-twiddle and rescale are combined; the post-twiddle uses a twiddle factor table which is already bit-shifted; and, windowing is combined in this step also, with window values coming from either a table or a fast integer sine calculator, and finally, values are overlapped and stored in the correct location in the output array.
30. A method of simplified input processing specific to AAC Low Complexity (LC) audio decoding profile, wherein, Mid/Side, Intensity and Temporal Noise Shaping steps, are optional; in cases where these three features are not present, there are no dependencies within a frame until the IFFT step within IMDCT itself; and, operations between noiseless decoding and the pre-IFFT operations within IMDCT itself are combined, minimizing memory access.
31. A method of audio decoding using an alternative bit-operation based Taylor computation, wherein, trigonometric identities are used to express the sine calculation in terms of a sine in the range of 0 to PI/2, resulting in angle X;
X is multiplied by X, resulting in S; perform a bit-shift operation by calculating X*(256-S*(43-(S<<1))); the result producing a window value in the range of 0 to 255, allowing fast windowing without the use of lookup tables; and, combining the bit-shift operation with other fixed-point multiplication steps.
32. A method of IMDCT short window processing for digital audio decoding, wherein, IMDCT 1024 values are divided into sequences of 8 short windows;
IMDCT window and overlap functions are performed on each short window; each window of 128 values results in a synthesis output window of 256 values; these output windows are then overlapped, resulting in non-zero values in the range of
448 to 1600.
33. A method of low energy gap timing in audio playback, wherein, an interleaved process in audio decoding detects frames of low energy; audio playback is controlled so a gap will occur during the detected frames, which may be dropped so that synchronization with video is not lost.
PCT/CA2005/000617 2005-04-22 2005-04-22 Multimedia system for mobile client platforms WO2006110975A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CA2005/000617 WO2006110975A1 (en) 2005-04-22 2005-04-22 Multimedia system for mobile client platforms

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CA2005/000617 WO2006110975A1 (en) 2005-04-22 2005-04-22 Multimedia system for mobile client platforms

Publications (2)

Publication Number Publication Date
WO2006110975A1 true WO2006110975A1 (en) 2006-10-26
WO2006110975A8 WO2006110975A8 (en) 2008-01-24

Family

ID=37114645

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2005/000617 WO2006110975A1 (en) 2005-04-22 2005-04-22 Multimedia system for mobile client platforms

Country Status (1)

Country Link
WO (1) WO2006110975A1 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009055899A1 (en) * 2007-11-02 2009-05-07 Ecole De Technologie Superieure System and method for quality-aware selection of parameters in transcoding of digital images
WO2010048703A1 (en) * 2008-10-30 2010-05-06 Raja Tuli Image transmission method
US8224104B2 (en) 2007-11-02 2012-07-17 Ecole De Technologie Superieure System and method for predicting the file size of images subject to transformation by scaling and a change of quality-controlling parameters
US8270739B2 (en) 2007-12-03 2012-09-18 Ecole De Technologie Superieure System and method for quality-aware selection of parameters in transcoding of digital images
US8300961B2 (en) 2008-12-12 2012-10-30 Ecole De Technologie Superieure Method and system for low complexity transcoding of images with near optimal quality
WO2013007943A1 (en) * 2011-07-12 2013-01-17 France Telecom Adaptations of analysis or synthesis weighting windows for transform coding or decoding
US20130034146A1 (en) * 2011-08-02 2013-02-07 Hyeonkuk Jeong Selection of encoder and decoder for a video communications session
WO2014137372A1 (en) * 2013-03-08 2014-09-12 Thomson Licensing Method and apparatus for using a list driven selection process to improve video and media time based editing
WO2016042463A1 (en) * 2014-09-15 2016-03-24 P. N. C. Digital Limited A computer implemented system and method for transferring multimedia content
US9338450B2 (en) 2013-03-18 2016-05-10 Ecole De Technologie Superieure Method and apparatus for signal encoding producing encoded signals of high fidelity at minimal sizes
US9661331B2 (en) 2013-03-18 2017-05-23 Vantrix Corporation Method and apparatus for signal encoding realizing optimal fidelity
US10609405B2 (en) 2013-03-18 2020-03-31 Ecole De Technologie Superieure Optimal signal encoding based on experimental data
CN112002335A (en) * 2010-12-03 2020-11-27 杜比实验室特许公司 Audio decoding method and apparatus and method for processing media data
CN114245192A (en) * 2021-12-20 2022-03-25 北京数码视讯软件技术发展有限公司 Multi-process decoding playing method and device
US11461070B2 (en) 2017-05-15 2022-10-04 MIXHalo Corp. Systems and methods for providing real-time audio and data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6138009A (en) * 1997-06-17 2000-10-24 Telefonaktiebolaget Lm Ericsson System and method for customizing wireless communication units
WO2001031497A1 (en) * 1999-10-22 2001-05-03 Activesky, Inc. An object oriented video system
US20020069218A1 (en) * 2000-07-24 2002-06-06 Sanghoon Sull System and method for indexing, searching, identifying, and editing portions of electronic multimedia files
US20020193127A1 (en) * 1999-11-17 2002-12-19 Andreas Martschitsch Method and system for preparing and transmitting SMS messages in a mobile radio network
US6845398B1 (en) * 1999-08-02 2005-01-18 Lucent Technologies Inc. Wireless multimedia player

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6138009A (en) * 1997-06-17 2000-10-24 Telefonaktiebolaget Lm Ericsson System and method for customizing wireless communication units
US6845398B1 (en) * 1999-08-02 2005-01-18 Lucent Technologies Inc. Wireless multimedia player
WO2001031497A1 (en) * 1999-10-22 2001-05-03 Activesky, Inc. An object oriented video system
US20020193127A1 (en) * 1999-11-17 2002-12-19 Andreas Martschitsch Method and system for preparing and transmitting SMS messages in a mobile radio network
US20020069218A1 (en) * 2000-07-24 2002-06-06 Sanghoon Sull System and method for indexing, searching, identifying, and editing portions of electronic multimedia files

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101459395B1 (en) 2007-11-02 2014-11-10 에꼴 드 테크놀로지 수페리에르 System and method for quality-aware selection of parameters in transcoding of digital images
US8374443B2 (en) 2007-11-02 2013-02-12 Ecole De Technologie Superieure System and method for predicting the file size of images subject to transformation by scaling and a change of quality-controlling parameters
CN101842812B (en) * 2007-11-02 2012-05-30 高等技术学校 System and method for quality-aware selection of parameters in transcoding of digital images
US8224104B2 (en) 2007-11-02 2012-07-17 Ecole De Technologie Superieure System and method for predicting the file size of images subject to transformation by scaling and a change of quality-controlling parameters
WO2009055899A1 (en) * 2007-11-02 2009-05-07 Ecole De Technologie Superieure System and method for quality-aware selection of parameters in transcoding of digital images
US8666183B2 (en) 2007-12-03 2014-03-04 Ecole De Technologie Superieur System and method for quality-aware selection of parameters in transcoding of digital images
US8295624B2 (en) 2007-12-03 2012-10-23 Ecole De Technologie Superieure Method and system for generating a quality prediction table for quality-aware transcoding of digital images
US8270739B2 (en) 2007-12-03 2012-09-18 Ecole De Technologie Superieure System and method for quality-aware selection of parameters in transcoding of digital images
US8559739B2 (en) 2007-12-03 2013-10-15 Ecole De Technologie Superieure System and method for quality-aware selection of parameters in transcoding of digital images
WO2010048703A1 (en) * 2008-10-30 2010-05-06 Raja Tuli Image transmission method
US8300961B2 (en) 2008-12-12 2012-10-30 Ecole De Technologie Superieure Method and system for low complexity transcoding of images with near optimal quality
US8660339B2 (en) 2008-12-12 2014-02-25 Ecole De Technologie Superieure Method and system for low complexity transcoding of image with near optimal quality
CN112002335A (en) * 2010-12-03 2020-11-27 杜比实验室特许公司 Audio decoding method and apparatus and method for processing media data
US10373622B2 (en) 2011-07-12 2019-08-06 Orange Coding and decoding devices and methods using analysis or synthesis weighting windows for transform coding or decoding
WO2013007943A1 (en) * 2011-07-12 2013-01-17 France Telecom Adaptations of analysis or synthesis weighting windows for transform coding or decoding
RU2607230C2 (en) * 2011-07-12 2017-01-10 Оранж Adaptation of weighing analysis or synthesis windows for encoding or decoding by conversion
US9368121B2 (en) 2011-07-12 2016-06-14 Orange Adaptations of analysis or synthesis weighting windows for transform coding or decoding
FR2977969A1 (en) * 2011-07-12 2013-01-18 France Telecom ADAPTATION OF ANALYSIS OR SYNTHESIS WEIGHTING WINDOWS FOR TRANSFORMED CODING OR DECODING
US10045089B2 (en) * 2011-08-02 2018-08-07 Apple Inc. Selection of encoder and decoder for a video communications session
US20130034146A1 (en) * 2011-08-02 2013-02-07 Hyeonkuk Jeong Selection of encoder and decoder for a video communications session
WO2014137372A1 (en) * 2013-03-08 2014-09-12 Thomson Licensing Method and apparatus for using a list driven selection process to improve video and media time based editing
AU2013381005B2 (en) * 2013-03-08 2017-09-14 Interdigital Ce Patent Holdings Method and apparatus for using a list driven selection process to improve video and media time based editing
US9661331B2 (en) 2013-03-18 2017-05-23 Vantrix Corporation Method and apparatus for signal encoding realizing optimal fidelity
US9615101B2 (en) 2013-03-18 2017-04-04 Ecole De Technologie Superieure Method and apparatus for signal encoding producing encoded signals of high fidelity at minimal sizes
US9338450B2 (en) 2013-03-18 2016-05-10 Ecole De Technologie Superieure Method and apparatus for signal encoding producing encoded signals of high fidelity at minimal sizes
US10609405B2 (en) 2013-03-18 2020-03-31 Ecole De Technologie Superieure Optimal signal encoding based on experimental data
WO2016042463A1 (en) * 2014-09-15 2016-03-24 P. N. C. Digital Limited A computer implemented system and method for transferring multimedia content
US11461070B2 (en) 2017-05-15 2022-10-04 MIXHalo Corp. Systems and methods for providing real-time audio and data
US11625213B2 (en) 2017-05-15 2023-04-11 MIXHalo Corp. Systems and methods for providing real-time audio and data
CN114245192A (en) * 2021-12-20 2022-03-25 北京数码视讯软件技术发展有限公司 Multi-process decoding playing method and device
CN114245192B (en) * 2021-12-20 2024-02-02 北京数码视讯软件技术发展有限公司 Multi-process decoding and playing method and device

Also Published As

Publication number Publication date
WO2006110975A8 (en) 2008-01-24

Similar Documents

Publication Publication Date Title
US10771849B2 (en) Multimedia system for mobile client platforms
WO2006110975A1 (en) Multimedia system for mobile client platforms
EP2596633B1 (en) A media streaming apparatus
KR101644208B1 (en) Video encoding using previously calculated motion information
KR100681168B1 (en) System and method for encoding and decoding residual signals for fine granular scalable video
CN102301710B (en) Multiple bit rate video encoding using variable bit rate and dynamic resolution for adaptive video streaming
CN100370802C (en) Bit-plane dependent signal compression
KR101376666B1 (en) Enhancing image quality
US9113177B2 (en) Methods, apparatuses and computer program products for pausing video streaming content
KR20010080644A (en) System and Method for encoding and decoding enhancement layer data using base layer quantization data
CN101077011A (en) System and method for real-time transcoding of digital video for fine-granular scalability
US9818422B2 (en) Method and apparatus for layered compression of multimedia signals for storage and transmission over heterogeneous networks
CN102783147A (en) Budget encoding
EP2022045A2 (en) Decoding of predictively coded data using buffer adaptation
AU2008202703A1 (en) Apparatus and method for providing multimedia content
JP2005513928A (en) Complexity scalability (FGS) for fine grain video coding
CN1726644A (en) Apparatus and method for multiple description encoding
Su et al. A practical design of high-volume steganography in digital video files
WO2003058973A2 (en) Zigzag scan order for image/video encoder and decoder
US8345746B2 (en) Video quantizer unit and method thereof
Webb et al. Video and Audio Coding for Mobile Applications
KR100552169B1 (en) Video streaming signal compression device of mobile telecommunication system
Chiariglione Moving picture experts group (mpeg)
Pejhan et al. Online rate control for video streams
Mahalakshmi et al. INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY Data Compression in Multimedia (Text, Image, Audio and Video)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2005738274

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

NENP Non-entry into the national phase

Ref country code: RU

WWW Wipo information: withdrawn in national office

Country of ref document: RU

WWW Wipo information: withdrawn in national office

Ref document number: 2005738274

Country of ref document: EP

122 Ep: pct application non-entry in european phase

Ref document number: 05738274

Country of ref document: EP

Kind code of ref document: A1