Moving Picture Experts Group

From Wikipedia, the free encyclopedia

MPEG logo
MPEG formats are used on several media. This image relates some of the best-known media to the MPEG format version and container format (TS or PS) used.

The Moving Picture Experts Group (MPEG) is a working group of authorities that was formed by ISO and IEC to set standards for audio and video compression and transmission.[1] It was established in 1988 on the initiative of Hiroshi Yasuda (Nippon Telegraph and Telephone) and Leonardo Chiariglione,[2] who has chaired the group since its inception. The first MPEG meeting was held in May 1988 in Ottawa, Canada.[3][4][5] As of late 2005, MPEG had grown to include approximately 350 members per meeting from various industries, universities, and research institutions.[needs update] MPEG's official designation is ISO/IEC JTC 1/SC 29/WG 11 – Coding of moving pictures and audio (ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 11).[6][7][8][9]

Sub Groups

ISO/IEC JTC1/SC29/WG11 – Coding of moving pictures and audio has the following subgroups (SG):[6]

  • Requirements
  • Systems
  • Video
  • Audio
  • 3D Graphics Compression
  • Test
  • Communication

Cooperation with other groups

Joint Video Team

The Joint Video Team (JVT) is a joint project between ITU-T SG16/Q.6 (Study Group 16 / Question 6) – VCEG (Video Coding Experts Group) and ISO/IEC JTC1/SC29/WG11 – MPEG for the development of a new video coding ITU-T Recommendation and ISO/IEC International Standard.[6][10] It was formed in 2001 and its main result has been H.264/MPEG-4 AVC (MPEG-4 Part 10).[11]

Joint Collaborative Team on Video Coding

The Joint Collaborative Team on Video Coding (JCT-VC) is a group of video coding experts from ITU-T Study Group 16 (VCEG) and ISO/IEC JTC 1/SC 29/WG 11 (MPEG). It was created in 2010 to develop High Efficiency Video Coding, a new-generation video coding standard that further reduces the data rate required for high-quality video coding by about 50% compared with the then-current ITU-T H.264 / ISO/IEC 14496-10 standard.[12][13] JCT-VC is co-chaired by Jens-Rainer Ohm and Gary Sullivan.

Standards

The MPEG standards consist of different Parts; each Part covers a certain aspect of the whole specification.[14] The standards also specify Profiles and Levels: a Profile defines the set of coding tools that are available, and a Level defines the range of permissible values for the properties associated with them[15] (a minimal sketch of a level-conformance check follows the MPEG-1 to MPEG-4 items below). Some of the approved MPEG standards have been revised by later amendments and/or new editions. MPEG has standardized the following compression formats and ancillary standards:

  • MPEG-1 (1993): Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s (ISO/IEC 11172). This initial version is a lossy file format and is the first MPEG compression standard for audio and video. It is commonly limited to about 1.5 Mbit/s, although the specification is capable of much higher bit rates. It was basically designed to allow moving pictures and sound to be encoded into the bit rate of a Compact Disc. It is used on Video CD and can be used for low-quality video on DVD Video. It was used in digital satellite/cable TV services before MPEG-2 became widespread. To meet the low bit-rate requirement, MPEG-1 downsamples the images and uses picture rates of only 24–30 Hz, resulting in moderate quality.[16] It includes the popular MPEG-1 Audio Layer III (MP3) audio compression format.
  • MPEG-2 (1995): Generic coding of moving pictures and associated audio information (ISO/IEC 13818). Transport, video and audio standards for broadcast-quality television. The MPEG-2 standard was considerably broader in scope and of wider appeal, supporting interlacing and high definition. MPEG-2 is considered important because it has been chosen as the compression scheme for over-the-air digital television (ATSC, DVB and ISDB), digital satellite TV services like Dish Network, digital cable television signals, SVCD and DVD Video.[16] It is also used on Blu-ray Discs, but these normally use MPEG-4 Part 10 or SMPTE VC-1 for high-definition content.
  • MPEG-3: MPEG-3 dealt with standardizing scalable and multi-resolution compression[16] and was intended for HDTV compression but was found to be redundant and was merged with MPEG-2; as a result there is no MPEG-3 standard.[16][17] MPEG-3 is not to be confused with MP3, which is MPEG-1 or MPEG-2 Audio Layer III.
  • MPEG-4 (1998): Coding of audio-visual objects (ISO/IEC 14496). MPEG-4 provides a framework for more advanced compression algorithms, potentially resulting in higher compression ratios compared to MPEG-2 at the cost of higher computational requirements. MPEG-4 supports Intellectual Property Management and Protection (IPMP), which provides the facility to use proprietary technologies to manage and protect content, such as digital rights management.[18] It also supports MPEG-J, a fully programmatic solution for the creation of custom interactive multimedia applications (a Java application environment with a Java API), and many other features.[19][20][21] Several new higher-efficiency video standards (newer than MPEG-2 Video) are included, notably MPEG-4 Part 2 (Visual) and MPEG-4 Part 10 (Advanced Video Coding), the latter developed jointly with ITU-T VCEG as H.264.

MPEG-4 has been chosen as the compression scheme for over-the-air digital television in Brazil (ISDB-Tb), based on the original digital television standard from Japan (ISDB-T).[22]
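As noted in the introduction to this list, a Profile restricts the coding tools a stream may use and a Level bounds its parameters. The short Python sketch below is purely illustrative: it checks stream parameters against the commonly cited MPEG-2 Main Profile @ Main Level limits (720x576 luminance samples, 30 frames/s, 15 Mbit/s); the normative constraint tables in the standards contain additional restrictions that are not modelled here.

    # Illustrative sketch of a profile/level conformance check; the MP@ML
    # figures below are the commonly cited limits, not the normative tables.
    from dataclasses import dataclass

    @dataclass
    class Level:
        name: str
        max_width: int          # luminance samples per line
        max_height: int         # lines per frame
        max_frame_rate: float   # frames per second
        max_bit_rate: int       # bits per second

    MPEG2_MAIN_LEVEL = Level("Main Level", 720, 576, 30.0, 15_000_000)

    def conforms(width: int, height: int, frame_rate: float,
                 bit_rate: int, level: Level) -> bool:
        """Return True if the stream parameters stay within the level's limits."""
        return (width <= level.max_width
                and height <= level.max_height
                and frame_rate <= level.max_frame_rate
                and bit_rate <= level.max_bit_rate)

    # Example: standard-definition PAL video at 6 Mbit/s fits within MP@ML.
    print(conforms(720, 576, 25.0, 6_000_000, MPEG2_MAIN_LEVEL))   # True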

In addition, the following standards, while not sequential advances to the video encoding standard as with MPEG-1 through MPEG-4, are referred to by similar notation:

  • MPEG-7 (2002): Multimedia content description interface. (ISO/IEC 15938)
  • MPEG-21 (2001): Multimedia framework (MPEG-21). (ISO/IEC 21000) MPEG describes this standard as a multimedia framework; it also provides for intellectual property management and protection.

More recently than the standards above, MPEG has also begun the following groups of international standards; each group gathers multiple MPEG technologies for a particular area of application.[23][24][25][26][27] (For example, MPEG-A includes a number of technologies on multimedia application format.)

  • MPEG-A (2007): Multimedia application format (MPEG-A). (ISO/IEC 23000) (e.g., Purpose for multimedia application formats,[28] MPEG music player application format, MPEG photo player application format and others)
  • MPEG-B (2006): MPEG systems technologies. (ISO/IEC 23001) (e.g., Binary MPEG format for XML,[29] Fragment Request Units, Bitstream Syntax Description Language (BSDL) and others)
  • MPEG-C (2006): MPEG video technologies. (ISO/IEC 23002) (e.g., Accuracy requirements for implementation of integer-output 8x8 inverse discrete cosine transform[30] and others)
  • MPEG-D (2007): MPEG audio technologies. (ISO/IEC 23003) (e.g., MPEG Surround,[31] SAOC-Spatial Audio Object Coding and USAC-Unified Speech and Audio Coding)
  • MPEG-E (2007): Multimedia Middleware. (ISO/IEC 23004) (a.k.a. M3W) (e.g., Architecture,[32] Multimedia application programming interface (API), Component model and others)
  • MPEG-G (2019): Genomic Information Representation. (ISO/IEC 23092) Part 1 – Transport and Storage of Genomic Information; Part 2 – Coding of Genomic Information; Part 3 – APIs; Part 4 – Reference Software; Part 5 – Conformance;
  • Supplemental media technologies (2008). (ISO/IEC 29116) Part 1 – Media streaming application format protocols; this part will be revised in MPEG-M Part 4 – MPEG extensible middleware (MXM) protocols.[33]
  • MPEG-V (2011): Media context and control. (ISO/IEC 23005) (a.k.a. Information exchange with Virtual Worlds)[34][35] (e.g., Avatar characteristics, Sensor information, Architecture[36][37] and others)
  • MPEG-M (2010): MPEG eXtensible Middleware (MXM). (ISO/IEC 23006)[38][39][40] (e.g., MXM architecture and technologies,[41] API, MPEG extensible middleware (MXM) protocols[42])
  • MPEG-U (2010): Rich media user interfaces. (ISO/IEC 23007)[43][44] (e.g., Widgets)
  • MPEG-H (2013): High Efficiency Coding and Media Delivery in Heterogeneous Environments. (ISO/IEC 23008) Part 1 – MPEG media transport; Part 2 – High Efficiency Video Coding; Part 3 – 3D Audio.
  • MPEG-DASH (2012): Information technology – Dynamic adaptive streaming over HTTP (DASH). (ISO/IEC 23009) Part 1 – Media presentation description and segment formats. (A minimal client-side adaptation sketch appears after this list.)
  • MPEG-I (2020): Coded Representation of Immersive Media. (ISO/IEC 23090)[45] Part 2 – Omnidirectional Media Format (OMAF); Part 3 – Versatile Video Coding.
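MPEG-DASH itself only standardizes the media presentation description (MPD) and the segment formats; the adaptation logic is left to the client. The following Python sketch is a hypothetical illustration of that client-side step, with made-up representation bitrates and safety margin, and with MPD parsing and HTTP segment fetching omitted.

    # Hypothetical sketch of the rate-adaptation step in a DASH-style client.
    # Representation bitrates and the safety margin are illustrative values.
    from typing import Sequence

    def pick_representation(bitrates_bps: Sequence[int],
                            measured_throughput_bps: float,
                            safety_margin: float = 0.8) -> int:
        """Pick the highest-bitrate representation that fits the measured throughput."""
        budget = measured_throughput_bps * safety_margin
        candidates = [b for b in sorted(bitrates_bps) if b <= budget]
        return candidates[-1] if candidates else min(bitrates_bps)

    # Example: three representations advertised in an MPD, ~4 Mbit/s measured.
    print(pick_representation([1_000_000, 3_000_000, 6_000_000], 4_000_000))  # 3000000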
MPEG groups of standards[24][25][26][46][47]

Acronym | Title | ISO/IEC standards | First public release date (first edition) | Description
MPEG-1 | Coding of moving pictures and associated audio for digital storage media | ISO/IEC 11172 | 1993 | Commonly limited to about 1.5 Mbit/s, although the specification is capable of much higher bit rates
MPEG-2 | Generic coding of moving pictures and associated audio information | ISO/IEC 13818 | 1995 | –
MPEG-3 | Abandoned; incorporated into MPEG-2 | – | – | –
MPEG-4 | Coding of audio-visual objects | ISO/IEC 14496 | 1999 | –
MPEG-7 | Multimedia content description interface | ISO/IEC 15938 | 2002 | –
MPEG-21 | Multimedia framework (MPEG-21) | ISO/IEC 21000 | 2001 | –
MPEG-A | Multimedia application format (MPEG-A) | ISO/IEC 23000 | 2007 | –
MPEG-B | MPEG systems technologies | ISO/IEC 23001 | 2006 | –
MPEG-C | MPEG video technologies | ISO/IEC 23002 | 2006 | –
MPEG-D | MPEG audio technologies | ISO/IEC 23003 | 2007 | –
MPEG-E | Multimedia Middleware | ISO/IEC 23004 | 2007 | –
MPEG-G | Genomic Information Representation | ISO/IEC 23092 | 2019 | –
(none) | Supplemental media technologies | ISO/IEC 29116 | 2008 | Part 1 to be revised in MPEG-M Part 4 – MPEG extensible middleware (MXM) protocols
MPEG-V | Media context and control | ISO/IEC 23005[36] | 2011 | –
MPEG-M | MPEG extensible middleware (MXM) | ISO/IEC 23006[41] | 2010 | –
MPEG-U | Rich media user interfaces | ISO/IEC 23007[43] | 2010 | –
MPEG-H | High Efficiency Coding and Media Delivery in Heterogeneous Environments | ISO/IEC 23008[48] | 2013 | –
MPEG-DASH | Information technology – DASH | ISO/IEC 23009 | 2012 | –
MPEG-I | Coded Representation of Immersive Media | ISO/IEC 23090 | TBD (2020) | –

Standardization process

A standard published by ISO/IEC is the last stage of a long process that starts with the proposal of new work within a committee. Here are some abbreviations used for marking a standard with its status:[3][49][50][51][52][53]

  • PWI – Preliminary Work Item
  • NP or NWIP – New Proposal / New Work Item Proposal (e.g., ISO/IEC NP 23007)
  • AWI – Approved new Work Item (e.g., ISO/IEC AWI 15444-14)
  • WD – Working Draft
  • CD – Committee Draft (e.g., ISO/IEC CD 23000-5)
  • FCD – Final Committee Draft (e.g., ISO/IEC FCD 23000-12)
  • DIS – Draft International Standard
  • FDIS – Final Draft International Standard
  • PRF – Proof of a new International Standard
  • IS – International Standard (e.g., ISO/IEC 13818-1:2007)
  • CD Amd / PDAmd (PDAM) – Committee Draft Amendment / Proposed Draft Amendment (e.g., ISO/IEC 13818-1:2007/CD Amd 6)
  • FPDAmd / DAM (DAmd) – Final Proposed Draft Amendment / Draft Amendment (e.g., ISO/IEC 14496-14:2003/FPDAmd 1)
  • FDAM (FDAmd) – Final Draft Amendment (e.g., ISO/IEC 13818-1:2007/FDAmd 4)
  • Amd – Amendment (e.g., ISO/IEC 13818-1:2007/Amd 1:2007)

Other abbreviations:

  • TR – Technical Report (e.g., ISO/IEC TR 13818-5:2005)
  • TS – Technical Specification
  • IWA – International Workshop Agreement
  • Cor – Technical Corrigendum (e.g., ISO/IEC 13818-1:2007/Cor 1:2008)

A proposal of new work (New Proposal) is approved first at the Subcommittee and then at the Technical Committee level (SC 29 and JTC 1, respectively, in the case of MPEG). When the scope of the new work is sufficiently clarified, MPEG usually makes open requests for proposals, known as a "Call for Proposals". The first document produced for audio and video coding standards is called a Verification Model (VM); in the case of MPEG-1 and MPEG-2 it was called the Simulation Model and Test Model, respectively. When sufficient confidence in the stability of the standard under development has been reached, a Working Draft (WD) is produced. This has the form of a standard but is kept internal to MPEG for revision. When a WD is sufficiently solid, it becomes a Committee Draft (CD), usually at the planned time, and is sent to the National Bodies (NBs) for ballot. The CD becomes a Final Committee Draft (FCD) if the number of positive votes is above the quorum. After a review of the comments issued by the NBs, the FCD is submitted to the NBs for a second ballot. If the FCD is approved, it becomes a Final Draft International Standard (FDIS). ISO then holds a final ballot with the National Bodies in which no technical changes are allowed (a yes/no ballot). If approved, the document becomes an International Standard (IS).[3]
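The main approval pipeline described above can be summarised as an ordered sequence of stages. The Python sketch below is a simplified illustration only: amendments, corrigenda, the Verification Model phase and the fast-track procedure are not modelled.

    # Simplified sketch of the main approval pipeline (NP -> WD -> CD -> FCD -> FDIS -> IS).
    MAIN_STAGES = ["NP", "WD", "CD", "FCD", "FDIS", "IS"]

    def next_stage(current: str, ballot_passed: bool = True) -> str:
        """Advance a document one stage along the pipeline after a successful ballot."""
        i = MAIN_STAGES.index(current)
        if not ballot_passed or i == len(MAIN_STAGES) - 1:
            return current  # ballot failed, or the document is already an IS
        return MAIN_STAGES[i + 1]

    print(next_stage("CD"))           # FCD, after a positive National Body ballot
    print(next_stage("FDIS", False))  # FDIS, a failed yes/no ballot blocks publication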

The ISO/IEC Directives also allow the so-called "fast-track procedure", in which a document is submitted directly for approval as a Draft International Standard (DIS) to the ISO member bodies, or as a Final Draft International Standard (FDIS) if the document was developed by an international standardizing body recognized by the ISO Council.[50]

Notes

  1. ^ John Watkinson, The MPEG Handbook, p.1
  2. ^ Hans Georg Musmann, Genesis of the MP3 Audio Coding Standard (PDF), archived from the original (PDF) on 2012-01-17, retrieved 2011-07-26
  3. ^ a b c "About MPEG". chiariglione.org. Retrieved 2009-12-13.
  4. ^ "MPEG Meetings". chiariglione.org. Archived from the original on 2011-07-25. Retrieved 2009-12-13.
  5. ^ chiariglione.org (2009-09-06). "Riding the Media Bits, The Faultline". Archived from the original on 2011-07-25. Retrieved 2010-02-09.
  6. ^ a b c ISO, IEC (2009-11-05). "ISO/IEC JTC 1/SC 29, SC 29/WG 11 Structure (ISO/IEC JTC 1/SC 29/WG 11 – Coding of Moving Pictures and Audio)". Archived from the original on 2001-01-28. Retrieved 2009-11-07.
  7. ^ MPEG Committee. "MPEG – Moving Picture Experts Group". Archived from the original on 2008-01-10. Retrieved 2009-11-07.
  8. ^ ISO. "MPEG Standards – Coded representation of video and audio". Archived from the original on 2011-05-14. Retrieved 2009-11-07.
  9. ^ ISO. "JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information". Retrieved 2009-11-11.
  10. ^ "ITU-T and ISO/IEC to produce next generation video coding standard". 2002-02-08. Retrieved 2010-03-08.
  11. ^ ITU-T. "Joint Video Team". Retrieved 2010-03-07.
  12. ^ ITU-T (January 2010). "Final joint call for proposals for next-generation video coding standardization". Retrieved 2010-03-07.
  13. ^ ITU-T. "Joint Collaborative Team on Video Coding – JCT-VC". Retrieved 2010-03-07.
  14. ^ Understanding MPEG-4, p.78
  15. ^ Cliff Wootton. A Practical Guide to Video and Audio Compression. p. 665.
  16. ^ a b c d The MPEG Handbook, p.4
  17. ^ Salomon, David (2007). "Video Compression". Data compression: the complete reference (4 ed.). Springer. p. 676. ISBN 978-1-84628-602-5.
  18. ^ Understanding MPEG-4, p.83
  19. ^ "MPEG-J White Paper". July 2005. Retrieved 2010-04-11.
  20. ^ "MPEG-J GFX white paper". July 2005. Retrieved 2010-04-11.
  21. ^ ISO. "ISO/IEC 14496-21:2006 – Information technology – Coding of audio-visual objects – Part 21: MPEG-J Graphics Framework eXtensions (GFX)". ISO. Retrieved 2009-10-30.
  22. ^ Fórum SBTVD. "O que é o ISDB-TB". Retrieved 2012-06-02.
  23. ^ "MPEG - The Moving Picture Experts Group website".
  24. ^ a b MPEG. "About MPEG – Achievements". chiariglione.org. Archived from the original on 2008-07-08. Retrieved 2009-10-31.
  25. ^ a b MPEG. "Terms of Reference". chiariglione.org. Archived from the original on 2010-02-21. Retrieved 2009-10-31.
  26. ^ a b MPEG. "MPEG standards – Full list of standards developed or under development". chiariglione.org. Archived from the original on 2010-04-20. Retrieved 2009-10-31.
  27. ^ MPEG. "MPEG technologies". chiariglione.org. Archived from the original on 2010-02-21. Retrieved 2009-10-31.
  28. ^ ISO. "ISO/IEC TR 23000-1:2007 – Information technology – Multimedia application format (MPEG-A) – Part 1: Purpose for multimedia application formats". Retrieved 2009-10-31.
  29. ^ ISO. "ISO/IEC 23001-1:2006 – Information technology – MPEG systems technologies – Part 1: Binary MPEG format for XML". Retrieved 2009-10-31.
  30. ^ ISO. "ISO/IEC 23002-1:2006 – Information technology – MPEG video technologies – Part 1: Accuracy requirements for implementation of integer-output 8x8 inverse discrete cosine transform". Retrieved 2009-10-31.
  31. ^ ISO. "ISO/IEC 23003-1:2007 – Information technology – MPEG audio technologies – Part 1: MPEG Surround". Retrieved 2009-10-31.
  32. ^ ISO. "ISO/IEC 23004-1:2007 – Information technology – Multimedia Middleware – Part 1: Architecture". Retrieved 2009-10-31.
  33. ^ ISO. "ISO/IEC 29116-1:2008 – Information technology – Supplemental media technologies – Part 1: Media streaming application format protocols". Retrieved 2009-11-07.
  34. ^ ISO/IEC JTC 1/SC 29 (2009-10-30). "MPEG-V (Media context and control)". Archived from the original on 2013-12-31. Retrieved 2009-11-01.
  35. ^ MPEG. "Working documents – MPEG-V (Information Exchange with Virtual Worlds)". chiariglione.org. Archived from the original on 2010-02-21. Retrieved 2009-11-01.
  36. ^ a b ISO. "ISO/IEC FDIS 23005-1 – Information technology – Media context and control – Part 1: Architecture". Retrieved 2011-01-28.
  37. ^ Christian Timmerer; Jean Gelissen; Markus Waltl & Hermann Hellwagner, Interfacing with Virtual Worlds (PDF), retrieved 2009-12-29
  38. ^ ISO/IEC JTC 1/SC 29 (2009-10-30). "MPEG-M (MPEG extensible middleware (MXM))". Archived from the original on 2013-12-31. Retrieved 2009-11-01.
  39. ^ MPEG. "MPEG Extensible Middleware (MXM)". Retrieved 2009-11-04.
  40. ^ ISO/IEC JTC 1/SC 29/WG 11 (October 2008). "MPEG eXtensible Middleware Vision". ISO. Retrieved 2009-11-05.
  41. ^ a b ISO. "ISO/IEC FCD 23006-1 – Information technology – MPEG extensible middleware (MXM) – Part 1: MXM architecture and technologies". Retrieved 2009-10-31.
  42. ^ ISO. "ISO/IEC 23006-4 – Information technology – MPEG extensible middleware (MXM) – Part 4: MPEG extensible middleware (MXM) protocols". Retrieved 2011-01-28.
  43. ^ a b ISO. "ISO/IEC 23007-1 – Information technology – Rich media user interfaces – Part 1: Widgets". Retrieved 2011-01-28.
  44. ^ ISO/IEC JTC 1/SC 29 (2009-10-30). "MPEG-U (Rich media user interfaces)". Archived from the original on 2013-12-31. Retrieved 2009-11-01.
  45. ^ https://mpeg.chiariglione.org/standards/mpeg-i
  46. ^ ISO/IEC JTC 1/SC 29 (2009-11-05). "Programme of Work (Allocated to SC 29/WG 11)". Archived from the original on 2013-12-31. Retrieved 2009-11-07.
  47. ^ ISO. "JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information". Retrieved 2009-11-07.
  48. ^ "ISO/IEC 23008-2:2013". International Organization for Standardization. 2013-11-25. Retrieved 2013-11-29.
  49. ^ ISO. "International harmonized stage codes". Retrieved 2009-12-31.
  50. ^ a b ISO. "Stages of the development of International Standards". Retrieved 2009-12-31.
  51. ^ "The ISO27k FAQ – ISO/IEC acronyms and committees". IsecT Ltd. Retrieved 2009-12-31.
  52. ^ ISO (2007). "ISO/IEC Directives Supplement – Procedures specific to ISO" (PDF). Retrieved 2009-12-31.
  53. ^ ISO (2007). "List of abbreviations used throughout ISO Online". Retrieved 2009-12-31.
