Digital image processing

From Wikipedia, the free encyclopedia

In computer science, digital image processing is the use of a digital computer to process digital images through an algorithm.[1][2] As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems. The generation and development of digital image processing have been shaped mainly by three factors: first, the development of computers; second, the development of mathematics (especially the creation and refinement of discrete mathematics theory); and third, the growing demand for applications in environment, agriculture, military, industry and medical science.


History

Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at Bell Laboratories, the Jet Propulsion Laboratory, the Massachusetts Institute of Technology, the University of Maryland, and a few other research facilities, with applications to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement.[3] The purpose of early image processing was to improve the quality of the image for human viewers: the input was a low-quality image, and the output an image with improved quality. Common image processing tasks included image enhancement, restoration, encoding, and compression. The first successful application was at the American Jet Propulsion Laboratory (JPL), which used techniques such as geometric correction, gradation transformation, and noise removal on the thousands of lunar photos sent back by the space probe Ranger 7 in 1964, taking into account the position of the Sun and the environment of the Moon. The computer-aided mapping of the Moon's surface was a great success. Later, more complex image processing was performed on the nearly 100,000 photos sent back by the spacecraft, producing topographic maps, color maps and panoramic mosaics of the Moon, which achieved extraordinary results and laid a solid foundation for the human Moon landing.[4]

The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. This led to images being processed in real-time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computer-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method, but also the cheapest.

Image sensors

The basis for modern image sensors is metal-oxide-semiconductor (MOS) technology,[5] which originates from the invention of the MOSFET (MOS field-effect transistor) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959.[6] This led to the development of digital semiconductor image sensors, including the charge-coupled device (CCD) and later the CMOS sensor.[5]

The charge-coupled device was invented by Willard S. Boyle and George E. Smith at Bell Labs in 1969.[7] While researching MOS technology, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next.[5] The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.[8]

The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels.[9][10] The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985.[11] The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993.[12] By 2007, sales of CMOS sensors had surpassed CCD sensors.[13]

Image compression

An important development in digital image compression technology was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed in 1972.[14] DCT compression became the basis for JPEG, which was introduced by the Joint Photographic Experts Group in 1992.[15] JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format on the Internet.[16] Its highly efficient DCT compression algorithm was largely responsible for the wide proliferation of digital images and digital photos,[17] with several billion JPEG images produced every day as of 2015.[18]
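As an illustration of the transform itself (not the full JPEG pipeline of quantization and entropy coding), a minimal pure-Python sketch of the 2-D DCT-II with orthonormal scaling might look like this; production codecs use fast, factorized implementations rather than this direct O(N^4) form:

```python
import math

def dct2(block):
    """Separable 2-D DCT-II of a square block (orthonormal scaling)."""
    N = len(block)
    def c(u):
        return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

# A flat 8x8 block concentrates all its energy in the DC coefficient,
# which is why the DCT compacts smooth image regions so well.
flat = [[100.0] * 8 for _ in range(8)]
coeffs = dct2(flat)
# coeffs[0][0] is approximately 800; every other coefficient is ~0
```

Compression then amounts to keeping only the few large coefficients and discarding (or coarsely quantizing) the rest, which for natural images cluster near zero at high frequencies.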

Digital signal processor (DSP)

Electronic signal processing was revolutionized by the wide adoption of MOS technology in the 1970s.[19] MOS integrated circuit technology was the basis for the first single-chip microprocessors and microcontrollers in the early 1970s,[20] and then the first single-chip digital signal processor (DSP) chips in the late 1970s.[21][22] DSP chips have since been widely used in digital image processing.[21]

The discrete cosine transform (DCT) image compression algorithm has been widely implemented in DSP chips, with many companies developing DSP chips based on DCT technology. DCTs are widely used for encoding, decoding, video coding, audio coding, multiplexing, control signals, signaling, analog-to-digital conversion, formatting luminance and color differences, and color formats such as YUV444 and YUV411. DCTs are also used for encoding operations such as motion estimation, motion compensation, inter-frame prediction, quantization, perceptual weighting, entropy encoding, variable encoding, and motion vectors, and decoding operations such as the inverse operation between different color formats (YIQ, YUV and RGB) for display purposes. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder chips.[23]
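The luminance/color-difference formatting mentioned above can be illustrated with the common full-range BT.601 RGB-to-YCbCr conversion used by JPEG; note this is one standard matrix among several, shown here only as a sketch:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr: one luma plus two chroma channels."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

# Neutral colors (white, gray, black) map to Cb = Cr = 128, so all the
# color information sits in the chroma channels, which codecs can then
# subsample (as in YUV411) with little visible loss.
```

Separating luma from chroma this way is what makes chroma subsampling formats such as YUV411 possible: the eye is far more sensitive to luminance detail than to color detail.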

Medical imaging

In 1972, Godfrey Hounsfield, an engineer at the British company EMI, invented the X-ray computed tomography (CT) device for head diagnosis. CT reconstructs a cross-sectional image from projections of the head section, processed by computer, a task known as image reconstruction. In 1975, EMI successfully developed a whole-body CT device, which obtained clear tomographic images of various parts of the human body. In 1979, this diagnostic technique won the Nobel Prize.[4] Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994.[24]

Tasks

Digital image processing allows the use of much more complex algorithms, and hence, can offer both more sophisticated performance at simple tasks, and the implementation of methods which would be impossible by analog means.

In particular, digital image processing is the only practical technology for[citation needed]:

Some techniques which are used in digital image processing include:

Digital image transformations

Filtering

Digital filters are used to blur and sharpen digital images. Filtering can be performed by:

  • convolution with specifically designed kernels (filter array) in the spatial domain[25]
  • masking specific frequency regions in the frequency (Fourier) domain

The following examples show both methods:[26]

Filter type | Kernel or mask | Example
Original image | (unfiltered) | Affine Transformation Original Checkerboard.jpg
Spatial lowpass | Spatial mean filter | Spatial Mean Filter Checkerboard.png
Spatial highpass | Spatial Laplacian filter | Spatial Laplacian Filter Checkerboard.png
Fourier representation | Pseudo-code below | Fourier Space Checkerboard.png
Fourier lowpass | Lowpass Butterworth mask | Lowpass Butterworth Checkerboard.png, Lowpass FFT Filtered checkerboard.png
Fourier highpass | Highpass Butterworth mask | Highpass Butterworth Checkerboard.png, Highpass FFT Filtered checkerboard.png

Pseudo-code for the Fourier representation:

image = checkerboard
F = Fourier transform of image
Show image: log(1 + |F|)
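Frequency-domain masking (the second filtering method above) can be sketched in pure Python with a naive 2-D DFT; a real implementation would use an FFT library rather than this O(N^4) loop, but the tiny example makes the mechanism visible:

```python
import cmath

def dft2(img):
    """Naive 2-D DFT (fine only for tiny demo images)."""
    M, N = len(img), len(img[0])
    return [[sum(img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)] for u in range(M)]

def idft2(F):
    """Inverse 2-D DFT, normalized by the image size."""
    M, N = len(F), len(F[0])
    return [[sum(F[u][v] * cmath.exp(2j * cmath.pi * (u * x / M + v * y / N))
                 for u in range(M) for v in range(N)) / (M * N)
             for y in range(N)] for x in range(M)]

# A 4x4 one-pixel checkerboard puts all its pattern energy in the
# highest-frequency bin, plus a DC term for the mean brightness.
img = [[(x + y) % 2 for y in range(4)] for x in range(4)]
F = dft2(img)
# Ideal lowpass mask: keep only the DC coefficient, zero everything else.
F_low = [[F[u][v] if (u, v) == (0, 0) else 0 for v in range(4)] for u in range(4)]
smooth = idft2(F_low)
# The result is the flat mean image (0.5 everywhere): the checker
# pattern lived entirely in the removed high-frequency bins.
```

A highpass mask does the opposite, zeroing the low-frequency region so that only edges and fine texture survive the inverse transform.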

Image padding in Fourier domain filtering

Images are typically padded before being transformed to the Fourier space; the highpass-filtered images below illustrate the consequences of different padding techniques:

Zero padded: Highpass FFT Filtered checkerboard.png
Repeated edge padded: Highpass FFT Replicate.png

Notice that the highpass-filtered image shows extra edges when zero-padded, compared to the repeated edge padding.
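The reason for those extra edges can be shown with a small sketch; `pad` here is a hypothetical helper written for illustration, not a library function:

```python
def pad(img, p, mode):
    """Pad a 2-D list image by p pixels, with zeros or replicated edges."""
    M, N = len(img), len(img[0])
    def px(x, y):
        if mode == "replicate":
            # clamp coordinates to the nearest valid pixel
            x = min(max(x, 0), M - 1)
            y = min(max(y, 0), N - 1)
            return img[x][y]
        # zero padding: anything outside the image is 0
        return img[x][y] if 0 <= x < M and 0 <= y < N else 0
    return [[px(x - p, y - p) for y in range(N + 2 * p)]
            for x in range(M + 2 * p)]

bright = [[200] * 3 for _ in range(3)]
zero_pad = pad(bright, 1, "zero")
edge_pad = pad(bright, 1, "replicate")
# zero_pad has a 0 border next to 200-valued pixels: a large artificial
# step that a highpass filter reports as an edge. edge_pad's border is
# 200, so no spurious edge is introduced at the image boundary.
```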

Filtering code examples

MATLAB example for spatial domain highpass filtering.

img=checkerboard(20);                           % generate checkerboard
% **************************  SPATIAL DOMAIN  ***************************
klaplace=[0 -1 0; -1 5 -1;  0 -1 0];             % Laplacian filter kernel
X=conv2(img,klaplace);                          % convolve test img with
                                                % 3x3 Laplacian kernel
figure()
imshow(X,[])                                    % show Laplacian filtered image
title('Laplacian Edge Detection')
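As a rough counterpart to the MATLAB snippet above, a minimal pure-Python "full" 2-D convolution (mirroring `conv2`'s default output size) might look like the following; in practice one would use an optimized library routine:

```python
def conv2_full(img, k):
    """'Full' 2-D convolution of two 2-D lists, like MATLAB's conv2."""
    M, N = len(img), len(img[0])
    m, n = len(k), len(k[0])
    out = [[0.0] * (N + n - 1) for _ in range(M + m - 1)]
    for i in range(M):
        for j in range(N):
            for a in range(m):
                for b in range(n):
                    out[i + a][j + b] += img[i][j] * k[a][b]
    return out

# Same sharpening kernel as the MATLAB snippet above
klaplace = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]
flat_img = [[10.0] * 5 for _ in range(5)]
sharp = conv2_full(flat_img, klaplace)
# The kernel's entries sum to 1, so the interior of a flat region is
# unchanged (sharp[2][2] stays 10.0); nonzero responses appear only
# near the borders, where the kernel overhangs the image.
```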

Affine transformations

Affine transformations enable basic image transformations including scaling, rotation, translation, mirroring and shearing, as shown in the following examples:[26]

Transformation | Affine matrix (homogeneous) | Example
Identity | [1 0 0; 0 1 0; 0 0 1] | Checkerboard identity.svg
Reflection (across the y-axis) | [-1 0 0; 0 1 0; 0 0 1] | Checkerboard reflection.svg
Scale (by cx, cy) | [cx 0 0; 0 cy 0; 0 0 1] | Checkerboard scale.svg
Rotate (by θ = π/6 = 30°) | [cos θ -sin θ 0; sin θ cos θ 0; 0 0 1] | Checkerboard rotate.svg
Shear (by cx, cy) | [1 cx 0; cy 1 0; 0 0 1] | Checkerboard shear.svg

To apply the affine matrix to an image, the image is converted to a matrix in which each entry corresponds to the pixel intensity at that location. Each pixel's location can then be represented as a vector of its coordinates in the image, [x, y], where x and y are the row and column of the pixel in the image matrix. This allows the coordinate to be multiplied by an affine transformation matrix, which gives the position to which the pixel value will be copied in the output image.

However, to allow transformations that include translation, three-dimensional homogeneous coordinates are needed. The third component is set to a constant, usually 1, so that the coordinate becomes [x, y, 1]. This allows the coordinate vector to be multiplied by a 3 × 3 matrix, making translation shifts possible.

Because matrix multiplication is associative, multiple affine transformations can be combined into a single affine transformation by multiplying the matrix of each individual transformation in the order that the transformations are done. This results in a single matrix that, when applied to a point vector, gives the same result as all the individual transformations performed on the vector [x, y, 1] in sequence. Thus a sequence of affine transformation matrices can be reduced to a single affine transformation matrix.

For example, two-dimensional coordinates only allow rotation about the origin (0, 0). But three-dimensional homogeneous coordinates can be used to first translate any point to (0, 0), then perform the rotation, and lastly translate back (the inverse of the first translation). These three affine transformations can be combined into a single matrix, thus allowing rotation around any point in the image.[27]
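This translate-rotate-translate composition can be sketched in Python with plain 3 × 3 homogeneous matrices (helper names here are illustrative, not a library API):

```python
import math

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    """Counter-clockwise rotation about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(M, x, y):
    """Apply a homogeneous 3x3 matrix to the point [x, y, 1]."""
    v = [x, y, 1]
    r = [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]
    return r[0], r[1]

# Rotate 90 degrees about the pivot (1, 1): translate the pivot to the
# origin, rotate, translate back -- composed into one matrix, applied
# right-to-left as in the text.
M = matmul(translate(1, 1), matmul(rotate(math.pi / 2), translate(-1, -1)))

# The point (2, 1) swings around (1, 1) to (1, 2).
x, y = apply(M, 2, 1)
```

Because the three matrices are multiplied once up front, every pixel in the image can then be transformed with a single matrix-vector product.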

Applications

Digital camera images

Digital cameras generally include specialized digital image processing hardware – either dedicated chips or added circuitry on other chips – to convert the raw data from their image sensor into a color-corrected image in a standard image file format.

Film

Westworld (1973) was the first feature film to use digital image processing, pixellating photography to simulate an android's point of view.[28]

See also

References

  1. ^ Chakravorty, Pragnan (2018). "What is a Signal? [Lecture Notes]". IEEE Signal Processing Magazine. 35 (5): 175–177. Bibcode:2018ISPM...35..175C. doi:10.1109/MSP.2018.2832195.
  2. ^ Gonzalez, Rafael (2018). Digital image processing. New York, NY: Pearson. ISBN 978-0-13-335672-4. OCLC 966609831.
  3. ^ Azriel Rosenfeld, Picture Processing by Computer, New York: Academic Press, 1969
  4. ^ a b Gonzalez, Rafael C. (2008). Digital image processing. Woods, Richard E. (Richard Eugene), 1954- (3rd ed.). Upper Saddle River, N.J.: Prentice Hall. pp. 23–28. ISBN 9780131687288. OCLC 137312858.
  5. ^ a b c Williams, J. B. (2017). The Electronics Revolution: Inventing the Future. Springer. pp. 245–8. ISBN 9783319490885.
  6. ^ "1960: Metal Oxide Semiconductor (MOS) Transistor Demonstrated". The Silicon Engine. Computer History Museum. Archived from the original on 3 October 2019. Retrieved 31 August 2019.
  7. ^ James R. Janesick (2001). Scientific charge-coupled devices. SPIE Press. pp. 3–4. ISBN 978-0-8194-3698-6.
  8. ^ Boyle, William S; Smith, George E. (1970). "Charge Coupled Semiconductor Devices". Bell Syst. Tech. J. 49 (4): 587–593. doi:10.1002/j.1538-7305.1970.tb01790.x.
  9. ^ Fossum, Eric R. (12 July 1993). "Active pixel sensors: Are CCDS dinosaurs?". In Blouke, Morley M. (ed.). Charge-Coupled Devices and Solid State Optical Sensors III. Proceedings of the SPIE. 1900. pp. 2–14. Bibcode:1993SPIE.1900....2F. CiteSeerX 10.1.1.408.6558. doi:10.1117/12.148585.
  10. ^ Fossum, Eric R. (2007). "Active Pixel Sensors" (PDF). Semantic Scholar. Archived (PDF) from the original on 9 March 2019. Retrieved 8 October 2019.
  11. ^ Matsumoto, Kazuya; et al. (1985). "A new MOS phototransistor operating in a non-destructive readout mode". Japanese Journal of Applied Physics. 24 (5A): L323. Bibcode:1985JaJAP..24L.323M. doi:10.1143/JJAP.24.L323.
  12. ^ Fossum, Eric R.; Hondongwa, D. B. (2014). "A Review of the Pinned Photodiode for CCD and CMOS Image Sensors". IEEE Journal of the Electron Devices Society. 2 (3): 33–43. doi:10.1109/JEDS.2014.2306412.
  13. ^ "CMOS Image Sensor Sales Stay on Record-Breaking Pace". IC Insights. 8 May 2018. Archived from the original on 21 June 2019. Retrieved 6 October 2019.
  14. ^ Ahmed, Nasir (January 1991). "How I Came Up With the Discrete Cosine Transform". Digital Signal Processing. 1 (1): 4–5. doi:10.1016/1051-2004(91)90086-Z. Archived from the original on 10 June 2016. Retrieved 10 October 2019.
  15. ^ "T.81 – DIGITAL COMPRESSION AND CODING OF CONTINUOUS-TONE STILL IMAGES – REQUIREMENTS AND GUIDELINES" (PDF). CCITT. September 1992. Archived (PDF) from the original on 17 July 2019. Retrieved 12 July 2019.
  16. ^ "The JPEG image format explained". BT.com. BT Group. 31 May 2018. Archived from the original on 5 August 2019. Retrieved 5 August 2019.
  17. ^ "What Is a JPEG? The Invisible Object You See Every Day". The Atlantic. 24 September 2013. Archived from the original on 9 October 2019. Retrieved 13 September 2019.
  18. ^ Baraniuk, Chris (15 October 2015). "Copy protections could come to JPEGs". BBC News. BBC. Archived from the original on 9 October 2019. Retrieved 13 September 2019.
  19. ^ Grant, Duncan Andrew; Gowar, John (1989). Power MOSFETS: theory and applications. Wiley. p. 1. ISBN 9780471828679. The metal-oxide-semiconductor field-effect transistor (MOSFET) is the most commonly used active device in the very large-scale integration of digital integrated circuits (VLSI). During the 1970s these components revolutionized electronic signal processing, control systems and computers.
  20. ^ Shirriff, Ken (30 August 2016). "The Surprising Story of the First Microprocessors". IEEE Spectrum. Institute of Electrical and Electronics Engineers. Archived from the original on 13 October 2019. Retrieved 13 October 2019.
  21. ^ a b "1979: Single Chip Digital Signal Processor Introduced". The Silicon Engine. Computer History Museum. Archived from the original on 3 October 2019. Retrieved 14 October 2019.
  22. ^ Taranovich, Steve (27 August 2012). "30 years of DSP: From a child's toy to 4G and beyond". EDN. Archived from the original on 14 October 2019. Retrieved 14 October 2019.
  23. ^ Stanković, Radomir S.; Astola, Jaakko T. (2012). "Reminiscences of the Early Work in DCT: Interview with K.R. Rao" (PDF). Reprints from the Early Days of Information Sciences. 60. Archived (PDF) from the original on 13 October 2019. Retrieved 13 October 2019.
  24. ^ "Space Technology Hall of Fame:Inducted Technologies/1994". Space Foundation. 1994. Archived from the original on 4 July 2011. Retrieved 7 January 2010.
  25. ^ Zhang, M. Z.; Livingston, A. R.; Asari, V. K. (2008). "A High Performance Architecture for Implementation of 2-D Convolution with Quadrant Symmetric Kernels". International Journal of Computers and Applications. 30 (4): 298–308. doi:10.1080/1206212x.2008.11441909.
  26. ^ a b Gonzalez, Rafael (2008). Digital Image Processing, 3rd. Pearson Hall. ISBN 9780131687288.
  27. ^ House, Keyser (6 December 2016). Affine Transformations (PDF). Clemson. Foundations of Physically Based Modeling & Animation. A K Peters/CRC Press. ISBN 9781482234602. Archived (PDF) from the original on 30 August 2017. Retrieved 26 March 2019.
  28. ^ Yaeger, Larry (16 August 2002). "A Brief, Early History of Computer Graphics in Film". Archived 17 July 2012 at the Wayback Machine. Retrieved 24 March 2010.

Further reading

  • R. Fisher; K Dawson-Howe; A. Fitzgibbon; C. Robertson; E. Trucco (2005). Dictionary of Computer Vision and Image Processing. John Wiley. ISBN 978-0-470-01526-1.
  • Rafael C. Gonzalez; Richard E. Woods; Steven L. Eddins (2004). Digital Image Processing using MATLAB. Pearson Education. ISBN 978-81-7758-898-9.
  • Milan Sonka; Vaclav Hlavac; Roger Boyle (1999). Image Processing, Analysis, and Machine Vision. PWS Publishing. ISBN 978-0-534-95393-5.
  • Rafael C. Gonzalez (2008). Digital Image Processing. Prentice Hall. ISBN 9780131687288.

External links

This page was last edited on 14 January 2020, at 16:48