
Graphical user interface

From Wikipedia, the free encyclopedia

The interim Dynabook GUI (Smalltalk-76 running on Alto)

The graphical user interface (GUI /ˈɡuːi/ GOO-ee) is a type of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation, instead of text-based user interfaces, typed command labels, or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs),[1][2][3] which require commands to be typed on a computer keyboard.

The actions in a GUI are usually performed through direct manipulation of the graphical elements.[4] Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, and smartphones, as well as smaller household, office, and industrial controls. The term GUI tends not to be applied to other lower-display-resolution types of interfaces, such as video games (where the head-up display (HUD)[5] is preferred), nor to interfaces that do not use flat screens, such as volumetric displays,[6] because the term is restricted to the scope of two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.

YouTube Encyclopedic

  • Graphical User Interfaces: Crash Course Computer Science #26
  • What is GUI (Hindi)
  • GUI vs CUI - Graphical User Interface and Character User Interface
  • The history of the graphic user interface
  • Xerox Star User Interface (1982) 1 of 2


Hi, I’m Carrie Anne, and welcome to CrashCourse Computer Science! We ended last episode with the 1984 release of Apple’s Macintosh personal computer. It was the first computer a regular person could buy with a graphical user interface and a mouse to interact with it. This was a radical evolution from the command line interfaces found on all other personal computers of the era. Instead of having to remember... or guess... the right commands to type in, a graphical user interface shows you what functions are possible. You just have to look around the screen for what you want to do. It’s a “point and click” interface. All of a sudden, computers were much more intuitive. Anybody, not just hobbyists or computer scientists, could figure things out all by themselves. INTRO The Macintosh is credited with taking Graphical User Interfaces, or GUIs, mainstream, but in reality they were the result of many decades of research. In previous episodes, we discussed some early interactive graphical applications, like Sketchpad and Spacewar!, both made in 1962. But these were one-off programs, and not whole integrated computing experiences. Arguably, the true forefather of modern GUIs was Douglas Engelbart. Let’s go to the thought bubble! During World War 2, while Engelbart was stationed in the Philippines as a radar operator, he read Vannevar Bush’s article on the Memex. These ideas inspired him, and when his Navy service ended, he returned to school, completing a Ph.D. in 1955 at U.C. Berkeley. Heavily involved in the emerging computing scene, he collected his thoughts in a seminal 1962 report, titled: “Augmenting Human Intellect”. Engelbart “believed that the complexity of the problems facing mankind [was] growing faster than our ability to solve them. Therefore, finding ways to augment our intellect would seem to be both a necessary and a desirable goal." 
He saw that computers could be useful beyond just automation, and be essential interactive tools for future knowledge workers to tackle complex problems. Further inspired by Ivan Sutherland’s recently demonstrated Sketchpad, Engelbart set out to make his vision a reality, recruiting a team to build the oN-Line System. He recognized that a keyboard alone was insufficient for the type of applications he was hoping to enable. In his words: "We envisioned problem-solvers using computer-aided working stations to augment their efforts. They required the ability to interact with information displays using some sort of device to move [a cursor] around the screen." And in 1964, working with colleague Bill English, he created the very first computer mouse. The wire came from the bottom of the device and looked very much like a rodent and the nickname stuck. Thanks thought bubble! In 1968, Engelbart demonstrated his whole system at the Fall Joint Computer Conference, in what’s often referred to as “the mother of all demos”. The demo was 90 minutes long and demonstrated many features of modern computing: bitmapped graphics, video conferencing, word processing, and collaborative real-time editing of documents. There were also precursors to modern GUIs, like the mouse and multiple windows – although they couldn’t overlap. It was way ahead of its time, and like many products with that label, it ultimately failed, at least commercially. But its influence on computer researchers of the day was huge. Engelbart was recognized for this watershed moment in computing with a Turing Award in 1997. Federal funding started to reduce in the early 1970s, which we discussed two episodes ago. At that point, many of Engelbart’s team, including Bill English, left and went to Xerox's newly formed Palo Alto Research Centre, more commonly known as Xerox PARC. It was here that the first true GUI computer was developed: the Xerox Alto, finished in 1973. 
For the computer to be easy to use, it needed more than just fancy graphics. It needed to be built around a concept that people were already familiar with, so they could immediately recognize how to use the interface with little or no training. Xerox’s answer was to treat the 2D screen like the top of a desk… or desktop. Just like how you can have many papers laid out on a desk, a user could have several computer programs open at once. Each was contained in its own frame, which offered a view onto the application – called a window. Also like papers on a desk, these windows could overlap, blocking the items behind them. And there were desk accessories, like a calculator and clock, that the user could place on the screen and move around. It wasn’t an exact copy of a desktop though. Instead, it was a metaphor of a desktop. For this reason, not surprisingly, it’s called the Desktop Metaphor. There are many ways to design an interface like this, but the Alto team did it with windows, icons, menus, and a pointer – what’s called a WIMP interface. It’s what most desktop GUIs use today. It also offered a basic set of widgets, reusable graphical building blocks... things like buttons, checkboxes, sliders, and tabs, which were also drawn from real-world objects to make them familiar. GUI applications are constructed from these widgets, so let’s try coding a simple example using this new programming paradigm. First, we have to tell the operating system that we need a new window to be created for our app. We do this through a GUI API. We need to specify the name of the window and also its size. Let’s say 500 by 500 pixels. Now, let’s add some widgets – a text box and a button. These require a few parameters to create. First, we need to specify what window they should appear in, because apps can have multiple windows. We also need to specify the default text, the X and Y location in the window, and a width and height.
Ok, so now we’ve got something that looks like a GUI app, but has no functionality. If you click the “roll” button, nothing happens. In previous examples we’ve discussed, the code pretty much executes from top to bottom. GUIs, on the other hand, use what’s called event-driven programming; code can fire at any time, and in different orders, in response to events. In this case, it’s user-driven events, like clicking on a button, selecting a menu item, or scrolling a window. Or if a cat runs across your keyboard, it’s a bunch of events all at once! Let’s say that when the user clicks the “roll” button, we want to randomly generate a number between 1 and 20, and then show that value in our text box. We can write a function that does just that. We can even get a little fancy and say if we get the number 20, set the background color of the window to blood red! The last thing we need to do is hook this code up so that it’s triggered each time our button is clicked. To do this, we need to specify that our function “handles” this event for our button, by adding a line to our initialize function. The type of event, in this case, is a click event, and our function is the event handler for that event. Now we’re done. We can click that button all day long, and each time, our “roll D20” function gets dispatched and executed. This is exactly what’s happening behind the scenes when you press the little bold button in a text editor, or select shutdown from a dropdown menu – a function linked to that event is firing. Hope I don’t roll a 20. Ahhhh! Ok, back to the Xerox Alto! Roughly 2000 Altos were made, and used at Xerox and given to university labs. They were never sold commercially. Instead, the PARC team kept refining the hardware and software, culminating in the Xerox Star system, released in 1981. The Xerox Star extended the desktop metaphor.
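The roll-a-D20 walkthrough above can be sketched in code. Since the transcript's GUI API is generic pseudocode, the `Window` and `Widget` classes below (and their `on`/`fire` methods) are invented, headless stand-ins for a real toolkit such as Tkinter or Qt, which would supply the window, the widgets, and the event loop that dispatches clicks:

```python
import random

# Hypothetical, minimal stand-ins for a GUI toolkit's objects,
# illustrating the event-driven pattern the transcript describes.
class Window:
    def __init__(self, title, width, height):
        self.title, self.width, self.height = title, width, height
        self.background = "grey"

class Widget:
    def __init__(self, window, text, x, y, width, height):
        self.window, self.text = window, text
        self.x, self.y, self.width, self.height = x, y, width, height
        self._handlers = {}  # event name -> handler function

    def on(self, event, handler):
        # Register `handler` as the event handler for `event`.
        self._handlers[event] = handler

    def fire(self, event):
        # A real toolkit's event loop calls this when the user acts.
        if event in self._handlers:
            self._handlers[event]()

# Initialization: one window plus two widgets, as in the transcript.
window = Window("Dice Roller", 500, 500)
text_box = Widget(window, "", x=200, y=150, width=100, height=50)
roll_button = Widget(window, "Roll", x=200, y=250, width=100, height=50)

def roll_d20():
    # Event handler: runs only when the click event fires.
    value = random.randint(1, 20)
    text_box.text = str(value)
    if value == 20:
        window.background = "red"

# Hook the handler up so it is triggered on each button click.
roll_button.on("click", roll_d20)

# Simulate the user clicking the button once.
roll_button.fire("click")
print(text_box.text)
```

The line `roll_button.on("click", roll_d20)` corresponds to the transcript's "adding a line to our initialize function": the click event is the event, and `roll_d20` is its event handler.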
Now, files looked like pieces of paper, and they could be stored in little folders, all of which could sit on your desktop, or be put away into digital filing cabinets. It’s a metaphor that sits on top of the underlying file system. From a user’s perspective, this is a new level of abstraction! Xerox, being in the printing machine business, also advanced text and graphics creation tools. For example, they introduced the terms: cut, copy and paste. This metaphor was drawn from how people dealt with making edits in documents written on typewriters. You’d literally cut text out with scissors, and then paste it, with glue, into the spot you wanted in another document. Then you’d photocopy the page to flatten it back down into a single layer, making the change invisible. Thank goodness for computers! This manual process was moot with the advent of word processing software, which existed on platforms like the Apple II and Commodore PET. But Xerox went way beyond the competition with the idea that whatever you made on the computer should look exactly like the real world version, if you printed it out. They dubbed this What-You-See-Is-What-You-Get or WYSIWYG. Unfortunately, like Engelbart’s oN-Line System, the Xerox Star was ahead of its time. Sales were sluggish because it had a price tag equivalent to nearly $200,000 today for an office setup. It also didn’t help that the IBM PC launched that same year, followed by a tsunami of cheap “IBM Compatible” PC Clones. But the great ideas that PARC researchers had been cultivating and building for almost a decade didn’t go to waste. In December of 1979, a year and a half before the Xerox Star shipped, a guy you may have heard of visited: Steve Jobs. There’s a lot of lore surrounding this visit, with many suggesting that Steve Jobs and Apple stole Xerox’s ideas. But that simply isn’t true. In fact, Xerox approached Apple, hoping to partner with them.
Ultimately, Xerox was able to buy a million dollar stake in Apple before its highly anticipated I.P.O., but it came with an extra provision: “disclose everything cool going on at Xerox PARC”. Steve knew they had some of the greatest minds in computing, but he wasn’t prepared for what he saw. There was a demonstration of Xerox’s graphical user interface, running on a crisp, bitmapped display, all driven with intuitive mouse input. Steve later said, “It was like a veil being lifted from my eyes. I could see the future of what computing was destined to be.” Steve returned to Apple with his engineering entourage, and they got to work inventing new features, like the menu bar and a trash can to store files to be deleted; it would even bulge when full – again with the metaphors. Apple’s first product with a graphical user interface, and mouse, was the Apple Lisa, released in 1983. It was a super advanced machine, with a super advanced price – almost 25 thousand dollars today. That was significantly cheaper than the Xerox Star, but it turned out to be an equal flop in the market. Luckily, Apple had another project up its sleeve: The Macintosh, released a year later, in 1984. It had a price of around 6,000 dollars today – a quarter of the Lisa’s cost. And it hit the mark, selling 70,000 units in the first 100 days. But after the initial craze, sales started to falter, and Apple was selling more of its Apple II computers than Macs. A big problem was that no one was making software for this new machine with its radical new interface. And it got worse. The competition caught up fast. Soon, other personal computers had primitive, but usable graphical user interfaces on computers a fraction of the cost. Consumers ate it up, and so did PC software developers. With Apple’s finances looking increasingly dire, and tensions growing with Apple’s new CEO, John Sculley, Steve Jobs was ousted. A few months later, Microsoft released Windows 1.0.
It may not have been as pretty as Mac OS, but it was the first salvo in what would become a bitter rivalry and near dominance of the industry by Microsoft. Within ten years, Microsoft Windows was running on almost 95% of personal computers. Initially, fans of Mac OS could rightly claim superior graphics and ease-of-use. Those early versions of Windows were all built on top of DOS, which was never designed to run GUIs. But, after Windows 3.1, Microsoft began to develop a new consumer-oriented OS with an upgraded GUI, called Windows 95. This was a significant rewrite that offered much more than just polished graphics. It also had advanced features Mac OS didn’t have, like program multitasking and protected memory. Windows 95 introduced many GUI elements still seen in Windows versions today, like the Start menu, taskbar, and Windows Explorer file manager. Microsoft wasn’t infallible though. Looking to make the desktop metaphor even easier and friendlier, it worked on a product called Microsoft Bob, and it took the idea of using metaphors to an extreme. Now you had a whole virtual room on your screen, with applications embodied as objects that you could put on tables and shelves. It even came with a crackling fireplace and a virtual dog to offer assistance. And you see those doors on the sides? Yep, those went to different rooms in your computer where different applications were available. As you might have guessed, it was not a success. This is a great example of how the user interfaces we enjoy today are the product of what’s essentially natural selection. Whether you’re running Windows, Mac, Linux, or some other desktop GUI, it’s almost certainly an evolved version of the WIMP paradigm first introduced on the Xerox Alto. Along the way, a lot of bad ideas were tried, and failed. Everything had to be invented, tested, refined, adopted or dropped. Today, GUIs are everywhere and while they’re good, they are not always great.
No doubt you’ve experienced design-related frustrations after downloading an application, used someone else’s phone, or visited a website. And for this reason, computer scientists and interface designers continue to work hard to craft computing experiences that are both easier and more powerful. Ultimately, working towards Engelbart's vision of augmenting human intellect. I’ll see you next week.


User interface and interaction design

 The graphical user interface is presented (displayed) on the computer screen. It is the result of processed user input and usually the main interface for human-machine interaction. The touch user interfaces popular on small mobile devices are an overlay of the visual output to the visual input.

Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks.

The visible graphical interface features of an application are sometimes referred to as chrome or GUI (pronounced gooey).[7][8] Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller allows a flexible structure in which the interface is independent from and indirectly linked to application functions, so the GUI can be customized easily. This allows users to select or design a different skin at will, and eases the designer's work to change the interface as user needs evolve. Good user interface design relates to users more, and to system architecture less.
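The model–view–controller separation described above can be sketched in a few lines. This is a minimal, hypothetical arrangement (the class names are illustrative, not from any particular toolkit); because the model knows nothing about the interface, swapping in a different view changes the "skin" without touching the application logic:

```python
# Model: holds application state, independent of any interface.
class CounterModel:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

# View: one possible "skin" that renders the model as plain text.
class TextView:
    def render(self, model):
        return f"Count: {model.count}"

# Controller: links user actions to the model, then refreshes the view.
class Controller:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_button_click(self):
        self.model.increment()
        return self.view.render(self.model)

controller = Controller(CounterModel(), TextView())
print(controller.on_button_click())   # Count: 1
print(controller.on_button_click())   # Count: 2
```

Replacing `TextView` with, say, a graphical gauge view requires no change to `CounterModel` or `Controller`, which is the flexibility the MVC structure is meant to provide.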

Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message or drawing. Smaller ones usually act as a user-input tool.

A GUI may be designed for the requirements of a vertical market as application-specific graphical user interfaces. Examples include automated teller machines (ATM), point of sale (POS) touchscreens at restaurants,[9] self-service checkouts used in a retail store, airline self-ticketing and check-in, information kiosks in a public space, like a train station or a museum, and monitors or control screens in an embedded industrial application which employ a real-time operating system (RTOS).

By the 1980s, cell phones and handheld game systems also employed application-specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation-multimedia center combinations.



Layers of a GUI based on a windowing system

A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information.

A series of elements conforming to a visual language have evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, menus, pointer (WIMP) paradigm, especially in personal computers.

The WIMP style of interaction uses a virtual input device to represent the position of a pointing device, most often a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices, graphics hardware, and positioning of the pointer.

In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment in which the display represents a desktop, on which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism.

Post-WIMP interface

Smaller mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP user interfaces.[10]

As of 2011, some touchscreen-based operating systems such as Apple's iOS (iPhone) and Android use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse.[11]


Human interface devices for efficient interaction with a GUI include a computer keyboard (especially used together with keyboard shortcuts); pointing devices for cursor (or rather pointer) control such as the mouse, pointing stick, touchpad, trackball, and joystick; virtual keyboards; and head-up displays (translucent information devices at eye level).

There are also actions performed by programs that affect the GUI. For example, there are components like inotify or D-Bus to facilitate communication between computer programs.

History of GUI

Early efforts

Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program. It used a light pen to create and manipulate objects in engineering drawings in real time with coordinated graphics. In the late 1960s, researchers at the Stanford Research Institute, led by Douglas Engelbart, developed the On-Line System (NLS), which used text-based hyperlinks manipulated with a then-new device: the mouse. In the 1970s, Engelbart's ideas were further refined and extended to graphics by researchers at Xerox PARC, specifically Alan Kay, who went beyond text-based hyperlinks and used a GUI as the main interface for the Xerox Alto computer, released in 1973. Most modern general-purpose GUIs are derived from this system.

The Xerox Star 8010 workstation introduced the first commercial GUI.

The Xerox PARC user interface consisted of graphical elements such as windows, menus, radio buttons, and check boxes. The concept of icons was later introduced by David Canfield Smith, who had written a thesis on the subject under the guidance of Kay.[12][13][14] The PARC user interface employs a pointing device along with a keyboard. These aspects can be emphasized by using the alternative term and acronym for windows, icons, menus, pointing device (WIMP). This effort culminated in the 1973 Xerox Alto, the first computer with a GUI, though the system never reached commercial production.

The first commercially available computer with a GUI was the 1979 PERQ workstation, manufactured by Three Rivers Computer Corporation. In 1981, Xerox eventually commercialized the Alto in the form of a new and enhanced system – the Xerox 8010 Information System – more commonly known as the Xerox Star.[15][16] These early systems spurred many other GUI efforts, including Lisp machines by Symbolics and other manufacturers, the Apple Lisa (which introduced the concepts of the menu bar and window controls) in 1983, the Apple Macintosh 128K in 1984, and the Atari ST (with Digital Research's GEM) and Commodore Amiga in 1985. Visi On was released in 1983 for IBM PC compatible computers, but was never popular due to its high hardware demands.[17] Nevertheless, it was a crucial influence on the contemporary development of Microsoft Windows.[18]

Apple, Digital Research, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM's Common User Access specifications formed the basis of the user interfaces used in Microsoft Windows, IBM OS/2 Presentation Manager, and the Unix Motif toolkit and window manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, and in various desktop environments for Unix-like operating systems, such as macOS and Linux. Thus most current GUIs have largely common idioms.

Macintosh 128K, the first Macintosh (1984)


GUIs were a hot topic in the early 1980s. The Apple Lisa was released in 1983, and various windowing systems existed for DOS operating systems (including PC GEM and PC/GEOS). Individual applications for many platforms presented their own GUI variants.[19] Despite GUIs' advantages, many reviewers questioned the value of the entire concept,[20] citing hardware limits and problems in finding compatible software.

In 1984, Apple released a television commercial which introduced the Apple Macintosh during the telecast of Super Bowl XVIII by CBS,[21] with allusions to George Orwell's noted novel, Nineteen Eighty-Four. The goal of the commercial was to make people think about computers, presenting the user-friendly interface as a personal computer which departed from prior business-oriented systems,[22] and it became a signature representation of Apple products.[23]

Accompanied by an extensive marketing campaign,[24] Windows 95 was a major success in the marketplace at launch and soon became the most popular desktop operating system.[25][citation needed]

In 2007, with the iPhone[26] and later in 2010 with the introduction of the iPad,[27] Apple popularized the post-WIMP style of interaction for multi-touch screens, and those devices were considered to be milestones in the development of mobile devices.[28][29]

The GUIs familiar to most people as of the mid-late 2010s are Microsoft Windows, macOS, and the X Window System interfaces for desktop and laptop computers, and Android, Apple's iOS, Symbian, BlackBerry OS, Windows Phone/Windows 10 Mobile, Palm OS-WebOS, and Firefox OS for handheld (smartphone) devices.[30][citation needed]

Comparison to other interfaces

Command-line interfaces

A modern CLI

Since many commands are available in command-line interfaces, complex operations can be performed using a short sequence of words and symbols. This allows greater efficiency and productivity once many commands are learned,[1][2][3] but reaching this level takes some time because the command words may not be easily discoverable or mnemonic. Also, using the command line can become slow and error-prone when users must enter long commands comprising many parameters or several different filenames at once. Windows, icons, menus, pointer (WIMP) interfaces, by contrast, present users with many widgets that represent and can trigger some of the system's available commands.

GUIs can become quite hard to use when dialogs are buried deep in a system or moved to different places during redesigns. Also, icons and dialog boxes are usually harder for users to script.

WIMPs extensively use modes, as the meaning of all keys and clicks on specific positions on the screen is redefined all the time. Command-line interfaces use modes only in limited forms, such as for the current directory and environment variables.

Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention. The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm, or File System Visualizer.

GUI wrappers

Graphical user interface (GUI) wrappers circumvent the command-line interface (CLI) versions of (typically) Linux and Unix-like software applications and their text-based user interfaces or typed command labels. While command-line or text-based applications allow users to run a program non-interactively, GUI wrappers atop them avoid the steep learning curve of the command line, which requires commands to be typed on the keyboard. By starting a GUI wrapper, users can intuitively interact with, start, stop, and change a program's working parameters through graphical icons and the visual indicators of a desktop environment, for example. Applications may also provide both interfaces, and when they do, the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The command-line version is often implemented first, because it allows the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also lets users run the program in a shell script.
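The wrapper idea can be sketched without any real GUI toolkit: the graphical layer's only job is to collect parameter values (here, plain variables standing in for widget values) and assemble the same command line a user could have typed. The wrapped "CLI program" below is a one-line Python script chosen so the sketch runs anywhere; a real wrapper would invoke the actual command-line tool instead:

```python
import subprocess
import sys

def build_command(message, repeat):
    # Assemble the command line from parameters a GUI would gather
    # from a text box and a spinner. The wrapped tool is a stand-in:
    # a tiny inline Python script, so no external program is needed.
    script = f"print({message!r} * {repeat})"
    return [sys.executable, "-c", script]

# Values that widgets would supply in a real GUI wrapper.
cmd = build_command("ha", 3)

# The wrapper runs the CLI program non-interactively and would then
# show its captured output somewhere in the interface.
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout.strip())   # hahaha
```

Because the GUI layer only builds and runs a command line, the underlying program remains fully scriptable, which is the property the paragraph above describes.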

Three-dimensional user interfaces

For typical computer displays, three-dimensional is a misnomer—their displays are two-dimensional. Semantically, however, most graphical user interfaces use three dimensions. With height and width, they offer a third dimension of layering or stacking screen elements over one another. This may be represented visually on screen through an illusionary transparent effect, which offers the advantage that information in background windows may still be read, if not interacted with. Or the environment may simply hide the background information, possibly making the distinction apparent by drawing a drop shadow effect over it.
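The layering described above can be sketched with a toy painter's-algorithm compositor (a simplified model, not any real windowing system's API): windows are 2D rectangles, and painting them back to front into a character grid makes frontmost windows hide what lies behind them, like overlapping papers on a desk:

```python
WIDTH, HEIGHT = 8, 4   # a tiny "screen" of 8x4 character cells

def composite(windows):
    """windows: list of (label, x, y, w, h), ordered back to front."""
    screen = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]
    for label, x, y, w, h in windows:   # front windows painted last
        for row in range(y, min(y + h, HEIGHT)):
            for col in range(x, min(x + w, WIDTH)):
                screen[row][col] = label
    return ["".join(row) for row in screen]

# Window B is stacked in front of window A and hides part of it.
frame = composite([("A", 0, 0, 5, 3), ("B", 3, 1, 4, 3)])
for line in frame:
    print(line)
```

The z-order is encoded purely in list position; a real compositor adds the transparency or drop-shadow effects mentioned above on top of the same stacking idea.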

Some environments use the methods of 3D graphics to project virtual three dimensional user interface objects onto the screen. These are often shown in use in science fiction films (see below for examples). As the processing power of computer graphics hardware increases, this becomes less of an obstacle to a smooth user experience.

Three-dimensional graphics are currently mostly used in computer games, art, and computer-aided design (CAD). A three-dimensional computing environment can also be useful in other applications, such as molecular graphics, aircraft design, and phase-equilibrium calculations and the design of unit operations and chemical processes.[31]

Several attempts have been made to create a multi-user three-dimensional environment, including the Croquet Project and Sun's Project Looking Glass.


The use of three-dimensional graphics has become increasingly common in mainstream operating systems, from creating attractive interfaces, termed eye candy, to functional purposes only possible using three dimensions. For example, user switching is represented by rotating a cube whose faces each hold a user's workspace, and window management is represented via a Rolodex-style flipping mechanism in Windows Vista (see Windows Flip 3D). In both cases, the operating system transforms windows on-the-fly while continuing to update the content of those windows.

Interfaces for the X Window System have also implemented advanced three-dimensional user interfaces through compositing window managers such as Beryl, Compiz and KWin using the AIGLX or XGL architectures, allowing use of OpenGL to animate user interactions with the desktop.

Another branch in the three-dimensional desktop environment is the three-dimensional GUIs that take the desktop metaphor a step further, like BumpTop, where users can manipulate documents and windows as if they were physical documents, with realistic movement and physics.

The zooming user interface (ZUI) is a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. It is a logical advance on the GUI, blending some three-dimensional movement with two-dimensional or 2.5D vector objects. In 2006, Hillcrest Labs introduced the first zooming user interface for television.[32]

In science fiction

Three-dimensional GUIs appeared in science fiction literature and films before they were technically feasible or in common use. For example, the 1993 American film Jurassic Park features Silicon Graphics' three-dimensional file manager File System Navigator, a real-life file manager for Unix operating systems. The film Minority Report has scenes of police officers using specialized 3D data systems. In prose fiction, three-dimensional user interfaces have been portrayed as immersive environments like William Gibson's Cyberspace or Neal Stephenson's Metaverse. Many futuristic imaginings of user interfaces rely heavily on object-oriented user interface (OOUI) style and especially object-oriented graphical user interface (OOGUI) style.[33]

See also


  1. ^ a b
  2. ^ a b
  3. ^ a b
  4. ^ "window manager Definition". PC Magazine. Ziff Davis Publishing Holdings Inc. Retrieved 12 November 2008. 
  5. ^ Greg Wilson (2006). "Off with Their HUDs!: Rethinking the Heads-Up Display in Console Game Design". Gamasutra. Retrieved February 14, 2006. 
  6. ^ "GUI definition". Linux Information Project. October 1, 2004. Retrieved 12 November 2008. 
  7. ^ The Jargon Book, "Chrome"
  8. ^ Jakob Nielsen. "Browser and GUI Chrome". 
  9. ^ The ViewTouch restaurant system by Giselle Bisson
  10. ^
  11. ^ Reality-Based Interaction: A Framework for Post-WIMP Interfaces
  12. ^ Lieberman, Henry. "A Creative Programming Environment, Remixed", MIT Media Lab, Cambridge.
  13. ^ Salha, Nader. "Aesthetics and Art in the Early Development of Human-Computer Interfaces", October 2012.
  14. ^ Smith, David. "Pygmalion: A Creative Programming Environment", 1975.
  15. ^ The first GUIs
  16. ^ Xerox Star user interface demonstration, 1982
  17. ^ "VisiCorp Visi On". The Visi On product was apparently not intended for the home user. It was designed and priced for high end corporate workstations. The hardware it required was quite a bit for 1983. It required a minimum of 512k of ram and a hard drive (5 megs of space). 
  18. ^ A Windows Retrospective, PC Magazine Jan 2009. 
  19. ^ "Magic Desk I for Commodore 64". 
  20. ^ "Value of Windowing is Questioned". 
  21. ^ Friedman, Ted (October 1997). "Apple's 1984: The Introduction of the Macintosh in the Cultural History of Personal Computers". Archived from the original on October 5, 1999. 
  22. ^ Friedman, Ted (2005). "Chapter 5: 1984". Electric Dreams: Computers in American Culture. New York University Press. ISBN 0-8147-2740-9. Retrieved October 6, 2011. 
  23. ^ Grote, Patrick (October 29, 2006). "Review of Pirates of Silicon Valley Movie". Archived from the original on November 7, 2006. Retrieved January 24, 2014. 
  24. ^ Washington Post (August 24, 1995). "With Windows 95's Debut, Microsoft Scales Heights of Hype". Washington Post. Retrieved November 8, 2013. 
  25. ^ "Computers | Timeline of Computer History | Computer History Museum". Retrieved 2017-04-02. 
  26. ^ Mather, John. iMania, Ryerson Review of Journalism, (February 19, 2007) Retrieved February 19, 2007
  27. ^ "the iPad could finally spark demand for the hitherto unsuccessful tablet PC" --Eaton, Nick The iPad/tablet PC market defined?, Seattle Post-Intelligencer, 2010
  28. ^ Bright, Peter Ballmer (and Microsoft) still doesn't get the iPad, Ars Technica, 2010
  29. ^ "The iPad's victory in defining the tablet: What it means". Infoworld. 
  30. ^ Hanson, Cody W. (2011-03-17). "Chapter 2: Mobile Devices in 2011". Library Technology Reports. 47 (2): 11–23. ISSN 0024-2586. 
  31. ^ Graphical User Interface, (GUI). "Topological Analysis of the Gibbs Energy Function (Liquid-Liquid Equilibrium Correlation Data. Including a Thermodinamic Review and Surfaces/Tie-lines/Hessian matrix analysis)". Institutional Repository (RUA). University of Alicante (Reyes-Labarta et al. 2015-18). 
  32. ^ Moren, Dan (November 11, 2006). "CES Unveiled@NY ‘07: Point and click coming to set-top boxes?". Archived 2011-11-08 at the Wayback Machine.
  33. ^ Dayton, Tom. "Object-Oriented GUIs are the Future". OpenMCT Blog. Archived from the original on 10 August 2014. Retrieved 23 August 2012. 

External links

This page was last edited on 31 January 2018, at 15:24.
Basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. WIKI 2 is an independent company and has no affiliation with Wikimedia Foundation.