From Wikipedia, the free encyclopedia

Cognition is "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses".[1] It encompasses many aspects of intellectual functions and processes such as attention, the formation of knowledge, memory and working memory, judgment and evaluation, reasoning and "computation", problem solving and decision making, comprehension and production of language. Cognitive processes use existing knowledge and generate new knowledge.

These processes are analyzed from different perspectives within different contexts, notably in the fields of linguistics, anesthesia, neuroscience, psychiatry, psychology, education, philosophy, anthropology, biology, systemics, logic, and computer science.[2] These and other different approaches to the analysis of cognition are synthesized in the developing field of cognitive science, a progressively autonomous academic discipline.

YouTube Encyclopedic

  • ✪ What Is Neuromorphic Computing (How AI Will Think)
  • ✪ TED: Cognitive Computing
  • ✪ Steven Pinker on How the Mind Works: Cognitive Science, Evolutionary Biology (1997)
  • ✪ Computational modeling of the brain - Sylvain Baillet
  • ✪ Lecture 2/10: Cognition as Computation [SHAIL 2012]

Transcription

Hi, thanks for tuning into Singularity Prosperity. This video is the eleventh in a multi-part series discussing computing. In this video, we'll be discussing what cognitive computing is, current cognitive computing initiatives and the impact they will have on the field of computing. [Music]

The human brain is truly an amazing machine: able to operate in parallel, malleable and fault-tolerant, with about 100 billion neurons, each having roughly 1,000 to 10,000 synapses (the connections to other neurons). This equates to 100 trillion up to 1 quadrillion synapses, all requiring only 20 watts of power in the space of 2 litres. As discussed in a previous video in this series about computing performance, the human brain is postulated to equate to 1 exaflop of performance, in other words a billion billion (10^18) calculations per second, and there are many initiatives to reach this exascale performance by 2020 in supercomputers around the world. Simulating the brain, that is, every neuron and synapse, with these exascale systems will require upwards of 1.5 million processors and over 1.6 petabytes of main high-speed memory, drawing power on the order of megawatts and taking up the space of entire buildings. All of this compared to our brains, which require just 20 watts of power in the space of 2 litres and still outperform these machines by orders of magnitude. On the petaflop K supercomputer in Japan, running Neural Simulation Technology (NEST) algorithms requires roughly 4.68 years to simulate 1 day of brain activity; that's about 1,700 times slower than the brain. Japan's Post-K exaflop supercomputer aims to bring this down to 310 times slower, simulating 1 day in the brain in 310 days.

While these simulations will aid us in unlocking secrets of the brain, due to the vast architectural differences between modern computers and biological brains, these exascale systems will still be limited in functionality. Every computer in the world today is based upon von Neumann architecture, with computation and memory fairly isolated and a data bus connecting them, whereas biological systems have memory and processing tightly coupled together. While von Neumann architecture is still the best choice for the majority of computing applications, as seen by the drastic performance differences in brain simulations, a more biologically representative architecture has to be implemented: neuromorphic architecture.

First and foremost, neuromorphic architectures will allow us to simulate aspects of the brain accurately and in real time. However, while this is one goal of this new brain-inspired architecture, our brains aren't perfect machines by any measure. They get bored, distracted, fatigued, are biased and are not perfect decision makers - they can be inconsistent and prone to errors. This leads us to another goal of neuromorphic architectures: to be paired with our devices and accelerate the field of artificial intelligence, that is, to take the best aspects of the brain's functionality and pair them with current von Neumann computing architecture. This is all encompassed under heterogeneous architecture, which we discussed in a previous video in this series, where multiple compute devices and architectures work in unison.
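As a rough, purely illustrative check of the figures quoted above, the short Python sketch below simply restates the transcript's numbers: the synapse count implied by the per-neuron estimate, what an exaflop means in raw operations per second, and the roughly 1,700x slowdown implied by 4.68 years of K-computer time per simulated day.

```python
# Back-of-the-envelope check of the figures quoted in the transcript.
neurons = 100e9                      # ~100 billion neurons
synapses_per_neuron = (1e3, 1e4)     # roughly 1,000 to 10,000 synapses each

low, high = (neurons * s for s in synapses_per_neuron)
print(f"Total synapses: {low:.0e} to {high:.0e}")       # 1e+14 to 1e+15,
# i.e. about 100 trillion to 1 quadrillion connections

exaflop = 1e18                       # 1 exaflop = 10^18 operations per second
print(f"1 exaflop = {exaflop:.0e} operations per second")

# K computer: ~4.68 years of compute per simulated day of brain activity
slowdown = 4.68 * 365 / 1
print(f"Slowdown vs. real time: ~{slowdown:.0f}x")      # ~1,700x
```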
Let's look at this in terms of the two halves of the brain, the left and right brain. The left brain is focused on logic, encompassing analytical thinking, language and other such tasks, while the right brain is focused on creativity, encompassing pattern recognition, learning, reasoning and so on. The right brain is clearly more abstract than its left-brain counterpart. Translating this to computing, left-brain tasks are best suited to traditional computers, while the right brain is what neuromorphic computing aims to handle. Left-brain performance is FLOPS-driven, while the right brain is driven by converting senses to action, or what some call SOPs, synaptic operations per second. Under HSA, heterogeneous architecture, the melding of these two halves is what will lead to truly intelligent robotics and machines that are able to operate in real time. Computing devices based on neuromorphic architectures will be able to truly learn and reason from their inputs, especially when paired with optimized software algorithms. This has been the central theme of our discussions in this computing series: hardware and software tightly coupled together to yield massive performance and efficiency gains.

One field of computer science that has gained tremendous steam in the past decade, and that is modeled on how our brains operate, is machine learning. By creating nodes, essentially neurons, assigning weights to them and then feeding in large sets of data, these nodes begin to interconnect, like synapses connecting neurons, into vast neural nets. These neural nets are referred to as machine learning models, which can then be applied to our devices and continually adapt by processing more data. This was an extremely quick overview of machine learning; a much more in-depth discussion will be had in this channel's AI series. Coming back to heterogeneous architecture: while neuromorphic chips paired with machine learning models will be able to learn and reason, on the von Neumann side these traditional compute devices, as we all know, excel at repetitive tasks, in this case executing the models produced by the neuromorphic chips. Neuromorphic chips paired with traditional computing technologies are leading to a new era of computing, cognitive computing: the first step on a long road toward emulating consciousness in machines.

So, how are we to design hardware that resembles the human brain? Well, first let's take a brief neuroscience lesson. The basic components of a neuron are the cell body, the axon and the synapses. Translating to hardware terms: the cell body is the processor, axons are a data bus and synapses are the memory, with all three combined to form a neurosynaptic core. Essentially, neurosynaptic cores are the nodes in machine learning neural nets, but represented through physical hardware rather than software abstraction. This alone would present a significant speed-up in performance, but neuromorphic architecture revolutionizes computing in many other ways. As Dr. Modha, an IBM Fellow working on IBM's neuromorphic chip TrueNorth, states: "IBM's brain inspired architecture consists of a network of neurosynaptic cores. These cores are distributed and operate in parallel. They operate without a clock, in an event-driven fashion. They integrate memory, computation and communication. Individual cores can fail and yet, like the brain, the architecture can still function. Cores on the same chip communicate with one another via an on-chip event-driven network. Chips can communicate via an inter-chip interface leading to seamless availability like the cortex, enabling the creation of scalable neuromorphic systems."
Now let's decode what this wall of text means:

1) Neuromorphic computing devices will operate without a clock and in parallel. This may be the most radical departure from current computing architecture that neuromorphic architecture makes. Like with signals in the brain, neuromorphic chips will operate in a clockless fashion through an event-driven model. This is what is referred to as a 'spiking' neural network, where neurosynaptic cores are only activated when signals reach a certain activation threshold, as compared to traditional computers, which run continuously until power is shut off. Parallel operation means that multiple neurosynaptic cores can be activated and trigger other cores at the same time, similar to how multiple neurons in the brain are always firing. This clockless, parallel architecture allows for vast decreases in energy consumption and increases in performance, as we'll see later.

2) Due to their design, neuromorphic architectures are scalable and fault-tolerant, as the brain is. If some cores stop working, the neural net can adapt and route through other cores; in the brain this is referred to as neuroplasticity. The neuromorphic chips are also designed in such a way that they can scale larger and larger. This scalability is in terms of adding additional cores on a board or interconnecting multiple boards together, representative of the multiple different regions of the brain working together.

3) Neurosynaptic cores tightly couple memory and computation, just as the brain does. We'll cover this in more depth in the next section, as some additional background context is needed.

Now that we have a basic understanding of neuromorphic architectures, we can discuss the two biggest players in the race right now: IBM with TrueNorth and Intel with Loihi. IBM TrueNorth was first conceptualized on July 16, 2004, with the goal of building brain-inspired computers. Seven years later, in 2011, the first TrueNorth chip was produced, simulating 256 neurons and 262,144 synapses, all in 1 neurosynaptic core. Progressing forward another 3 years, in 2014, IBM released a TrueNorth board with 1 million neurons and 256 million synapses in 4096 cores, having approximately 250 neurons and 65,000 synapses per core, with performance of 46 billion SOPs per watt. This second iteration of TrueNorth reduced its size 15-fold from its predecessor by using a 28-nanometer transistor process, and cut power consumption 100-fold, requiring just 70 milliwatts. IBM has remained fairly secretive since 2014 on the specifications of the next iteration of their TrueNorth boards; however, we do know their next goal is to create a system of 4 billion neurons and 1 trillion synapses interconnected amongst 4096 TrueNorth boards, all requiring only 1 kilowatt of power. Furthermore, IBM's stated final goal is to create "a brain in a box" in the 2020s, consisting of 10 billion neurons and 100 trillion synapses, all able to fit in the space of 2 litres and still using just 1 kilowatt of power. They say this will be achievable once transistors reach the 7 nanometer and 5 nanometer node sizes, which is already beginning to happen! As a side note, you can learn more about TrueNorth and how it will be programmed through IBM's SyNAPSE University; SyNAPSE is a software abstraction layer that IBM has developed for their architecture, similar to what CUDA is to NVIDIA GPUs.
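The clockless, threshold-driven behaviour described above can be illustrated with a toy spiking (leaky integrate-and-fire) neuron in Python. This is only a sketch of the general idea, not the TrueNorth or Loihi programming model; the class, parameters and input stream are all invented for the example.

```python
# Toy leaky integrate-and-fire neuron: event-driven, threshold-based "spiking".
# Illustrative only; not the TrueNorth/Loihi programming model.

class SpikingNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0       # membrane potential
        self.threshold = threshold
        self.leak = leak           # fraction of potential retained each step

    def receive(self, weighted_input):
        """Integrate input; fire (return True) only if the threshold is crossed."""
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after the spike, like a biological neuron
            return True            # event: a spike is emitted downstream
        return False               # no event, no downstream work

# Drive the neuron with a sparse input stream: computation happens only on spikes.
inputs = [0.0, 0.2, 0.0, 0.9, 0.0, 0.0, 0.4, 0.8]
neuron = SpikingNeuron()
for t, x in enumerate(inputs):
    if neuron.receive(x):
        print(f"t={t}: spike")     # downstream cores would be triggered here
```

The point of the event-driven style is visible in the loop: downstream work is triggered only on the time steps that actually produce a spike, which is where the energy savings come from.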
As of this year, at CES 2018, Intel also entered the neuromorphic computing race with its chip codenamed Loihi. What is currently known about this chip is that it is a 130,000-neuron, 130-million-synapse system, fabricated on the 14 nanometer transistor node. Both of these neuromorphic initiatives aim to radically transform machine learning, allowing for real-time, low-power processing, that being training (learning from data) and inference (applying the learnt models on edge devices). As you can see, massive strides in neuromorphic computing are beginning to be made, with research and development only expected to accelerate into the 2020s. On top of these 'right brain'-inspired clockless computing devices, AI ASICs and other traditional, von Neumann architecture compute devices will play a major role as well. To list some of the many: Intel Nervana, Intel Movidius, Nvidia Volta tensor cores, Nvidia Drive PX, Apple A11 Bionic Neural Engine - the list can go on and on. The compute devices just listed can be considered to represent the left brain and, when paired with right-brain devices as discussed earlier, will produce massive performance and efficiency gains. We've already discussed some of these devices in past videos and will discuss many more in this channel's AI series, self-driving series, etc., in the future! [Music]

Beyond the shrinking of the transistor, new materials, 3D integrated circuits and the many other innovations we've discussed in past videos in this series that will enhance the entire field of computing, one type of computing device that we haven't discussed is the memristor: "So, we're focused here on brain-inspired computing. The goal is not to replace humans but to take advantage of some of the tricks that brains use, and brains look very different than modern digital computers. Instead of the separated memory and processor that goes through sequentially and does an instruction at a time, brains instead look like these vast networks of neurons with extremely dense interconnections called synapses, and the kinds of operations that brains do, they do at thousands of times less energy per operation than digital computers, so we want to take advantage of some of that. We're also taking advantage of a technology that's been in development and research at HP for a number of years: memristors. So there are three parts to our work. Number one, we're mimicking this architecture that I just talked about, this vast network of interconnecting neurons and synapses - we're doing that with the memristor technology. Second, we're actually doing all of our computation in those memristor arrays directly, so this way we're avoiding fetching data, which is very energy-consuming and time-consuming; instead we're bringing all of the computing to the data directly, and so that's a big deal. Third, we're actually reproducing the key operations that brains appear to use, which are matrix operations, a whole lot of very simple multiplications and additions." "You're actually trying to collapse all of this system down into a single chip, the one that we were just seeing?" "Yeah, that's right, we can scale all of this hardware down to the size of roughly this chip right here!"

As you just saw, memristors are a technology that, like the brain, keeps memory and processing in the same place to avoid data fetching, and they are able to mimic brain operations, in other words the same operations used in machine learning algorithms: matrix operations.
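The 'matrix operations' the HP researchers refer to map naturally onto a crossbar of memristors: stored conductances act as weights, input voltages are applied to the rows, and by Ohm's and Kirchhoff's laws the column currents sum to a matrix-vector product computed in place. The NumPy sketch below is only a software analogy of that idea with made-up values; it is not HP's Dot-Product Engine or its API.

```python
import numpy as np

# Software analogy of a memristor crossbar "dot-product engine".
# Conductances (the stored weights) stay in place; applying voltages to the
# rows yields column currents I = G.T @ V, i.e. a matrix-vector multiply
# performed where the data lives, without fetching the weights.

G = np.array([[0.10, 0.30, 0.05],   # conductance of each memristor cell (siemens)
              [0.20, 0.10, 0.40],
              [0.05, 0.25, 0.15]])

V = np.array([1.0, 0.5, 0.2])       # input voltages applied to the rows (volts)

I = G.T @ V                          # currents summed on each column (amps)
print("Column currents:", I)

# The same values computed the conventional "fetch and multiply" way, for comparison:
print("Reference matmul:", np.dot(G.T, V))
```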
These memristors essentially act as a streamlined version of the neurosynaptic cores we discussed in the previous section, and they function in nearly the same way: 1) they're clockless and only execute when an activation threshold is reached; 2) they're parallel: multiple different branches of memristors can execute at the same time; 3) they're fault-tolerant: memristors model neuroplasticity in the sense that they can route around broken branches and rewrite themselves; 4) they're scalable: HP has shown that the large memristor array, the Dot-Product Engine, that we saw in the display stand can be shrunk down to the size of a chip and interconnected with other chips. Integrating this new memory-compute technology into neuromorphic chips will significantly increase neuromorphic architectural performance; HP claims that memristors will yield 100,000 times greater throughput in machine learning. It should be noted that there are other types of non-volatile memory in development that mimic brain circuitry, such as phase-change memory; however, memristors are the closest to commercial deployment.

Another field of research that can significantly increase neuromorphic computing devices' performance and efficiency is analog computing, also called 'dirty' computing since analog signals are so difficult to work with. Memristors actually already implement a form of analog computing by using a physical process to encode themselves: "First, you're using a physical process, Ohm's law, to do a multiplication. Instead of relying on digital technology, where we're having to pull all the numbers into the processor and then we have to push that result out, here, once you have set that value, it's always there; you pay that energy charge one time, you never have to move that weight again, and then you can use it over and over and repurpose it. So what I think of is that deep embedded system, not just the exascale, but at the complete other end of the spectrum: that deep embedded system in a spacecraft, a deep embedded system at the bottom of an oil well, something that is so hard to get to - you have this ability for this neuromorphic and neuroplastic system to be constantly changing, adjusting, learning and be that incredibly efficient engine. So I think that's what's so amazing, that sustainability of this technology!"

Beyond this application of analog computing, other applications include having the ability to process multiple 'senses', for example taking raw signals from multiple sensors, say a camera and a microphone, in real time, running them through a memristor array and activating simulated neurons. This is actually how the brain works: a mix of analog and digital signals that activate once a certain threshold is reached. It is difficult to pinpoint the trajectory the field of cognitive computing will take due to an ever-changing landscape, with exaflop simulations expected to be possible soon and more players entering the neuromorphic race every year, bringing new neuromorphic compute devices, new compute techniques, different heterogeneous architecture pairings, new AI ASICs - the list can go on and on. However, with all that being said, one thing is for certain: the 2020s will be a transformative decade, bringing new developments and research toward all three facets of cognitive computing.
1) Brain simulation. Realistic, real-time simulations of our brains will aid us in better understanding our bodies and minds, leading to developments in mental health initiatives and in treating neurological disorders such as Alzheimer's and ALS, cures to disease and infection, faster cures to new types of bacteria and viruses, innovations in gene editing such as CRISPR and more - many of these topics will be covered in videos focused on biotechnology in the future.

2) Artificial intelligence. Software neural nets coupled with neuromorphic hardware, which mimics how the brain functions and has high energy efficiency and performance, will radically transform and accelerate artificial intelligence initiatives. We'll see this impact of cognitive computing first in our edge devices. These first two facets of cognitive computing act as a positive feedback loop, like much of technological innovation does: brain simulations and research will lead to more advanced neuromorphic devices and architectures, leading to more advanced machine learning models, leading to better brain simulations - and it goes on and on.

This then leads us to the third facet of cognitive computing: brain-computer interfaces. This may sound more like science fiction, like a plot out of Black Mirror, than reality, and while I agree this facet is the farthest from real-world implementation, it is an inevitability in the coming decades and preliminary work has already begun. The integration of biology and technology encompasses many subtopics such as mind augmentation, mind transfer and uploading, artificial consciousness, cybernetics, etc. These topics, as well as the ethical concerns and issues they pose, are best left for future videos on this channel, but are mentioned here to satisfy curiosity and show the impact that these early neuromorphic innovations happening today will have on the future!

At this point the video has come to a conclusion; I'd like to thank you for taking the time to watch it! If you enjoyed it, consider supporting me on Patreon to keep this channel growing, and if you have any topic suggestions please leave them in the comments below! Consider subscribing for more content, follow my Medium publication for accompanying blogs and like my Facebook page for more bite-sized chunks of content! This has been Ankur, you've been watching Singularity Prosperity and I'll see you again soon! [Music]


Etymology

The word cognition comes from the Latin verb cognosco (con, 'with', and gnōscō, 'know'; itself a cognate of the Greek verb γι(γ)νώσκω, gi(g)nόsko, meaning 'I know, perceive'), meaning 'to conceptualize' or 'to recognize'.[3]

The beginnings of the studies on cognition

The word cognition dates back to the 15th century, when it meant "thinking and awareness".[4] Attention to cognitive processes came about more than eighteen centuries earlier, however, beginning with Aristotle (384–322 BC) and his interest in the inner workings of the mind and how they affect the human experience. Aristotle focused on cognitive areas pertaining to memory, perception, and mental imagery. He placed great importance on ensuring that his studies were based on empirical evidence, that is, scientific information that is gathered through observation and conscientious experimentation.[5] Two millennia later, as psychology emerged as a burgeoning field of study in Europe and then gained a following in America, other scientists like Wilhelm Wundt, Herman Ebbinghaus, Mary Whiton Calkins, and William James would offer their contributions to the study of human cognition.

Wilhelm Wundt (1832–1920) emphasized the notion of what he called introspection: examining the inner feelings of an individual. With introspection, the subject had to be careful to describe his or her feelings in the most objective manner possible in order for Wundt to find the information scientific.[6][7] Though Wundt's contributions are by no means minimal, modern psychologists find his methods to be quite subjective and choose to rely on more objective procedures of experimentation to make conclusions about the human cognitive process.

Hermann Ebbinghaus (1850–1909) conducted cognitive studies that mainly examined the function and capacity of human memory. Ebbinghaus developed his own experiment in which he constructed over 2,000 syllables made out of nonexistent words, for instance EAS. He then examined his own personal ability to learn these non-words. He purposely chose non-words as opposed to real words to control for the influence of pre-existing experience on what the words might symbolize, thus enabling easier recollection of them.[6][8] Ebbinghaus observed and hypothesized a number of variables that may have affected his ability to learn and recall the non-words he created. One of the reasons, he concluded, was the amount of time between the presentation of the list of stimuli and the recitation or recall of same. Ebbinghaus was the first to record and plot a "learning curve," and a "forgetting curve."[9] His work heavily influenced the study of serial position and its effect on memory, discussed in subsequent sections.

Mary Whiton Calkins (1863–1930) was an influential American pioneer in the realm of psychology. Her work also focused on human memory capacity. A well-known phenomenon, the recency effect, can be attributed to the studies that she conducted.[10] The recency effect, also discussed in the subsequent experiment section, is the tendency for individuals to be able to accurately recollect the final items presented in a sequence of stimuli. Calkins' theory is closely related to the aforementioned study and conclusions of the memory experiments conducted by Hermann Ebbinghaus.[11]

William James (1842–1910) is another pivotal figure in the history of cognitive science. James was quite discontent with Wundt's emphasis on introspection and Ebbinghaus' use of nonsense stimuli. He instead chose to focus on the human learning experience in everyday life and its importance to the study of cognition. James' most significant contribution to the study and theory of cognition was his textbook Principles of Psychology that preliminarily examines aspects of cognition such as perception, memory, reasoning, and attention.[11]

In psychology

Figure: When the mind makes a generalization such as the concept of tree, it extracts similarities from numerous examples; the simplification enables higher-level thinking (abstract thinking).

In psychology, the term "cognition" is usually used within an information processing view of an individual's psychological functions (see cognitivism),[12] and it is the same in cognitive engineering;[13] in a branch of social psychology called social cognition, the term is used to explain attitudes, attribution, and group dynamics.[12]

Human cognition is conscious and unconscious, concrete or abstract, as well as intuitive (like knowledge of a language) and conceptual (like a model of a language). It encompasses processes such as memory, association, concept formation, pattern recognition, language, attention, perception, action, problem solving and mental imagery.[14][15] Traditionally, emotion was not thought of as a cognitive process, but now much research is being undertaken to examine the cognitive psychology of emotion; research is also focused on one's awareness of one's own strategies and methods of cognition, which is called metacognition.

While few people would deny that cognitive processes are a function of the brain, a cognitive theory will not necessarily make reference to the brain or to biological processes (compare neurocognitive). It may purely describe behavior in terms of information flow or function. Relatively recent fields of study such as neuropsychology aim to bridge this gap, using cognitive paradigms to understand how the brain implements the information-processing functions (see also cognitive neuroscience), or to understand how pure information-processing systems (e.g., computers) can simulate human cognition (see also artificial intelligence). The branch of psychology that studies brain injury to infer normal cognitive function is called cognitive neuropsychology. The links of cognition to evolutionary demands are studied through the investigation of animal cognition.

Piaget's theory of cognitive development

For years, sociologists and psychologists have conducted studies on cognitive development or the construction of human thought or mental processes.

Jean Piaget was one of the most important and influential people in the field of developmental psychology. He believed that humans are unique in comparison to animals because we have the capacity to do "abstract symbolic reasoning." His work can be compared to that of Lev Vygotsky, Sigmund Freud, and Erik Erikson, who were also great contributors to the field of developmental psychology. Today, Piaget is known for studying cognitive development in children. He studied his own three children and their intellectual development and came up with a theory that describes the stages children pass through during development.[16]

  • Sensorimotor stage (Infancy, 0–2 years): Intelligence is present; motor activity but no symbols; knowledge is developing yet limited; knowledge is based on experiences/interactions; mobility allows the child to learn new things; some language skills are developed at the end of this stage. The goal is to develop object permanence; achieves basic understanding of causality, time, and space.
  • Pre-operational stage (Toddler and Early Childhood, 2–7 years): Symbols or language skills are present; memory and imagination are developed; nonreversible and nonlogical thinking; shows intuitive problem solving; begins to see relationships; grasps concept of conservation of numbers; egocentric thinking predominates.
  • Concrete operational stage (Elementary and Early Adolescence, 7–12 years): Logical and systematic form of intelligence; manipulation of symbols related to concrete objects; thinking is now characterized by reversibility and the ability to take the role of another; grasps concepts of the conservation of mass, length, weight, and volume; operational thinking predominates over nonreversible and egocentric thinking.
  • Formal operational stage (Adolescence and Adulthood, 12 years and on): Logical use of symbols related to abstract concepts; acquires flexibility in thinking as well as the capacities for abstract thinking and mental hypothesis testing; can consider possible alternatives in complex reasoning and problem solving.[17]

Common experiments on human cognition

Serial position

The serial position experiment is meant to test a theory of memory that states that when information is given in a serial manner, we tend to remember information in the beginning of the sequence, called the primacy effect, and information in the end of the sequence, called the recency effect. Consequently, information given in the middle of the sequence is typically forgotten, or not recalled as easily. This study predicts that the recency effect is stronger than the primacy effect, because the information that is most recently learned is still in working memory when asked to be recalled. Information that is learned first still has to go through a retrieval process. This experiment focuses on human memory processes.[18]
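As an illustration of how serial-position data are typically summarized, the sketch below tabulates recall accuracy by list position for a few hypothetical trials; the word lists, responses and data format are invented for the example, and a real study would use many participants and controlled presentation timing.

```python
# Hypothetical recall data: for each trial, the studied list and the items recalled.
trials = [
    (["pen", "dog", "cup", "map", "key", "box", "hat"], {"pen", "dog", "box", "hat"}),
    (["sun", "car", "fig", "jar", "rug", "net", "owl"], {"sun", "net", "owl"}),
    (["ice", "bat", "log", "pin", "fan", "cap", "tea"], {"ice", "bat", "cap", "tea"}),
]

positions = len(trials[0][0])
correct = [0] * positions

for studied, recalled in trials:
    for pos, item in enumerate(studied):
        if item in recalled:
            correct[pos] += 1

for pos, c in enumerate(correct, start=1):
    print(f"position {pos}: {c / len(trials):.0%} recalled")
# A primacy effect shows up as high recall at the first positions,
# a recency effect as high recall at the last ones.
```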

Word superiority

The word superiority experiment presents a subject with either a word or a single letter by itself for a brief period of time, e.g., 40 ms, and the subject is then asked to recall the letter that was in a particular location in the word. In theory, the subject should be better able to correctly recall the letter when it was presented in a word than when it was presented in isolation. This experiment focuses on human speech and language.[19]

Brown-Peterson

In the Brown-Peterson experiment, participants are briefly presented with a trigram and, in one particular version of the experiment, they are then given a distractor task, asking them to identify whether each of a sequence of letter strings is in fact a word or a non-word (due to being misspelled, etc.). After the distractor task, they are asked to recall the trigram from before the distractor task. In theory, the longer the distractor task, the harder it will be for participants to correctly recall the trigram. This experiment focuses on human short-term memory.[20]

Memory span

During the memory span experiment, each subject is presented with a sequence of stimuli of the same kind; words depicting objects, numbers, letters that sound similar, and letters that sound dissimilar. After being presented with the stimuli, the subject is asked to recall the sequence of stimuli that they were given in the exact order in which it was given. In one particular version of the experiment, if the subject recalled a list correctly, the list length was increased by one for that type of material, and vice versa if it was recalled incorrectly. The theory is that people have a memory span of about seven items for numbers, the same for letters that sound dissimilar and short words. The memory span is projected to be shorter with letters that sound similar and with longer words.[21]
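The adaptive rule described above (lengthen the list after a correct recall, shorten it after an error) is a simple staircase procedure. The sketch below shows that control logic with a simulated subject standing in for the human response; the simulated span of seven and all names are invented for the illustration.

```python
import random

# Staircase control logic for a memory-span procedure: the list grows by one
# item after a correct recall and shrinks by one after an error.

def simulated_recall_correct(sequence, true_span=7):
    """Stand-in for a human response: succeeds only up to the subject's span."""
    return len(sequence) <= true_span

def run_memory_span(trials=15, start_length=3, seed=0):
    rng = random.Random(seed)
    length = start_length
    digits = "0123456789"
    for t in range(1, trials + 1):
        sequence = [rng.choice(digits) for _ in range(length)]
        correct = simulated_recall_correct(sequence)
        print(f"trial {t}: length {length} -> {'correct' if correct else 'error'}")
        length += 1 if correct else -1
        length = max(1, length)
    return length

run_memory_span()
# The length at which the procedure settles into oscillation estimates the
# memory span (around seven items for digits, per the text above).
```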

Visual search

In one version of the visual search experiment, a participant is presented with a window that displays circles and squares scattered across it. The participant is to identify whether there is a green circle in the window. In the "feature" search, the subject is presented with several trial windows that have blue squares or circles and either one green circle or no green circle at all. In the "conjunctive" search, the subject is presented with trial windows that have blue circles or green squares and a present or absent green circle whose presence the participant is asked to identify. What is expected is that in the feature searches, reaction time, that is, the time it takes for a participant to identify whether a green circle is present or not, should not change as the number of distractors increases. Conjunctive searches where the target is absent should have a longer reaction time than conjunctive searches where the target is present. The theory is that in feature searches, it is easy to spot the target, or to determine that it is absent, because of the difference in color between the target and the distractors. In conjunctive searches where the target is absent, reaction time increases because the subject has to look at each shape to determine whether it is the target or not, because some of the distractors, if not all of them, share a feature (color or shape) with the target stimulus. Conjunctive searches where the target is present take less time because the search stops as soon as the target is found.[22]
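To make the feature/conjunction contrast concrete, the sketch below counts how many items would still need to be checked one by one after a parallel color filter, for made-up displays of increasing size; it is not a model of human vision, only an illustration of why conjunction-search time grows with the number of distractors while feature-search time stays flat.

```python
import random

# Each display item is a (color, shape) pair; the target is a green circle.
TARGET = ("green", "circle")

def make_display(n_distractors, conjunction, target_present, rng):
    if conjunction:
        # Conjunction display: every distractor shares a feature with the target.
        pool = [("blue", "circle"), ("green", "square")]
    else:
        # Feature display: distractors differ from the target in color.
        pool = [("blue", "circle"), ("blue", "square")]
    items = [rng.choice(pool) for _ in range(n_distractors)]
    if target_present:
        items.append(TARGET)
    rng.shuffle(items)
    return items

def items_to_inspect(items):
    """A color filter runs 'in parallel'; only the green items are left to be
    checked one by one for the right shape."""
    return len([it for it in items if it[0] == "green"])

rng = random.Random(1)
for n in (4, 8, 16, 32):
    for conj in (False, True):
        display = make_display(n, conj, target_present=False, rng=rng)
        kind = "conjunction" if conj else "feature"
        print(f"{kind:11s} search, {n:2d} distractors: "
              f"{items_to_inspect(display)} items left to check serially")
# Feature search leaves at most one candidate regardless of display size, so
# reaction time stays flat; conjunction search leaves roughly half the display,
# so reaction time grows with the number of distractors, as described above.
```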

Knowledge representation

The semantic network of knowledge representation systems has been studied in various paradigms. One of the oldest paradigms is the leveling and sharpening of stories as they are repeated from memory, studied by Bartlett. The semantic differential used factor analysis to determine the main meanings of words, finding that the value or "goodness" of words is the first factor. More controlled experiments examine the categorical relationships of words in free recall. The hierarchical structure of words has been explicitly mapped in George Miller's WordNet. More dynamic models of semantic networks have been created and tested with neural network experiments based on computational systems such as latent semantic analysis (LSA), Bayesian analysis, and multidimensional factor analysis. The semantics (meaning) of words is studied by all the disciplines of cognitive science.[citation needed]
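As a minimal illustration of the latent semantic analysis (LSA) approach mentioned above, the sketch below factorizes a tiny, invented term-document count matrix with a truncated SVD and compares the resulting word vectors by cosine similarity; real LSA corpora and preprocessing are of course far larger and more elaborate.

```python
import numpy as np

# Toy term-document count matrix (rows: words, columns: documents). Invented data.
words = ["doctor", "nurse", "hospital", "guitar", "piano", "concert"]
X = np.array([
    [2, 3, 0, 0],   # doctor
    [1, 2, 0, 0],   # nurse
    [3, 1, 0, 1],   # hospital
    [0, 0, 2, 3],   # guitar
    [0, 0, 3, 2],   # piano
    [0, 1, 2, 2],   # concert
], dtype=float)

# Truncated SVD: keep the k strongest latent "semantic" dimensions.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
word_vectors = U[:, :k] * s[:k]     # each word as a point in latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("doctor ~ nurse :", round(cosine(word_vectors[0], word_vectors[1]), 2))
print("doctor ~ guitar:", round(cosine(word_vectors[0], word_vectors[3]), 2))
# Words that occur in similar documents (doctor/nurse) end up close together,
# while unrelated words (doctor/guitar) do not, which is the sense in which
# LSA recovers semantic structure from co-occurrence statistics.
```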

Recent developments

An emergent field of research, referred to as "Team Cognition", is arising in military sciences. "Team cognition" indicates “an emergent property of teams that results from the interplay of individual cognition and team process behaviors [...] [Team cognition] underlies team performance” (Arizona State University East, 2005, Cooke NJ, 2005).[23]

Metacognition

Metacognition is "cognition about cognition", "thinking about thinking", "knowing about knowing", becoming "aware of one's awareness" and higher-order thinking skills. The term comes from the root word meta, meaning "beyond".[24] Metacognition can take many forms; it includes knowledge about when and how to use particular strategies for learning or problem-solving.[24] There are generally two components of metacognition: (1) knowledge about cognition and (2) regulation of cognition.[25]

Metamemory, defined as knowing about memory and mnemonic strategies, is an especially important form of metacognition.[26] Academic research on metacognitive processing across cultures is in the early stages, but there are indications that further work may provide better outcomes in cross-cultural learning between teachers and students.[27]

Some evolutionary psychologists hypothesize that humans use metacognition as a survival tool, which would make metacognition the same across cultures.[27][need quotation to verify] Writings on metacognition date back at least as far as two works by the Greek philosopher Aristotle (384-322 BC): On the Soul and the Parva Naturalia.[28]


References

  1. ^ "cognition - definition of cognition in English from the Oxford dictionary". www.oxforddictionaries.com. Retrieved 2016-02-04.
  2. ^ Von Eckardt, Barbara (1996). What is cognitive science?. Massachusetts: MIT Press. pp. 45–72. ISBN 9780262720236.
  3. ^ Stefano Franchi, Francesco Bianchini. "On The Historical Dynamics Of Cognitive Science: A View From The Periphery". The Search for a Theory of Cognition: Early Mechanisms and New Ideas. Rodopi, 2011. p. XIV.
  4. ^ Cognition: Theory and Practice by Russell Revlin
  5. ^ Matlin, Margaret (2009). Cognition. Hoboken, NJ: John Wiley & Sons, Inc. p. 4.
  6. ^ a b Fuchs, A. H.; Milar, K.J. (2003). "Psychology as a science". Handbook of psychology. 1 (The history of psychology): 1–26. doi:10.1002/0471264385.wei0101.
  7. ^ Zangwill, O. L. (2004). The Oxford companion to the mind. New York: Oxford University Press. pp. 951–952.
  8. ^ Zangwill, O.L. (2004). The Oxford companion to the mind. New York: Oxford University Press. p. 276.
  9. ^ T.L. Brink (2008) Psychology: A Student Friendly Approach. "Unit 7: Memory." p. 126
  10. ^ Madigan, S.; O'Hara, R. (1992). "Short-term memory at the turn of the century: Mary Whiton Calkin's memory research". American Psychologist. 47 (2): 170–174. doi:10.1037/0003-066X.47.2.170.
  11. ^ a b Matlin, Margaret (2009). Cognition. Hoboken, NJ: John Wiley & Sons, Inc. p. 5.
  12. ^ a b Sternberg, R. J., & Sternberg, K. (2009). Cognitive psychology (6th Ed.). Belmont, CA: Wadsworth, Cengage Learning.
  13. ^ Blomberg, O. (2011). "Concepts of cognition for cognitive engineering". International Journal of Aviation Psychology. 21 (1): 85–104. doi:10.1080/10508414.2011.537561.
  14. ^ Sensation & Perception, 5th ed. 1999, Coren, Ward & Enns, p. 9
  15. ^ Cognitive Psychology, 5th ed. 1999, Best, John B., pp. 15–17
  16. ^ Cherry, Kendra. "Jean Piaget Biography". The New York Times Company. Retrieved 18 September 2012.
  17. ^ Parke, R. D., & Gauvain, M. (2009). Child psychology: A contemporary viewpoint (7th Ed.). Boston, MA: McGraw-Hill.
  18. ^ Surprenant, A (2001). "Distinctiveness and serial position effects in total sequences". Perception and Psychophysics. 63 (4): 737–745. doi:10.3758/BF03194434. PMID 11436742.
  19. ^ Krueger, L. (1992). "The word-superiority effect and phonological recoding". Memory & Cognition. 20 (6): 685–694. doi:10.3758/BF03202718.
  20. ^ Nairne, J.; Whiteman, H.; Kelley, M. (1999). "Short-term forgetting of order under conditions of reduced interference" (PDF). Quarterly Journal of Experimental Psychology A: Human Experimental Psychology. 52: 241–251. doi:10.1080/713755806.
  21. ^ May, C.; Hasher, L.; Kane, M. (1999). "The role of interference in memory span". Memory & Cognition. 27 (5): 759–767. doi:10.3758/BF03198529. PMID 10540805.
  22. ^ Wolfe, J.; Cave, K.; Franzel, S. (1989). "Guided search: An alternative to the feature integration model for visual search". Journal of Experimental Psychology: Human Perception and Performance. 15 (3): 419–433. doi:10.1037/0096-1523.15.3.419.
  23. ^ Russo, M.; Fiedler, E.; Thomas, M.; McGhee (2005), United States Army Aeromedical Research Laboratory. Cognitive Performance in Operational Environments, North Atlantic Treaty Organization (NATO) RTO-MP-HFM-124, 14 - 3 - Open access material, PUBLIC RELEASE - ISBN 92-837-0044-9 - “Strategies to Maintain Combat Readiness during Extended Deployments – A Human Systems Approach”.
  24. ^ a b Metcalfe, J., & Shimamura, A. P. (1994). Metacognition: knowing about knowing. Cambridge, MA: MIT Press.
  25. ^ Schraw, Gregory (1998). "Promoting general metacognitive awareness". Instructional Science. 26: 113–125. doi:10.1023/A:1003044231033.
  26. ^ Dunlosky, J. & Bjork, R. A. (Eds.). Handbook of Metamemory and Memory. Psychology Press: New York.
  27. ^ a b Wright, Frederick. APERA Conference 2008. 14 April 2009. http://www.apera08.nie.edu.sg/proceedings/4.24.pdf[dead link]
  28. ^ Colman, Andrew M. (2001). "metacognition". A Dictionary of Psychology. Oxford Paperback Reference (4 ed.). Oxford: Oxford University Press (published 2015). p. 456. ISBN 9780199657681. Retrieved 2017-05-17. Writings on metacognition can be traced back at least as far as De Anima and the Parva Naturalia of the Greek philosopher Aristotle (384-322 BC) [...].

Further reading

  • Ardila, Alfredo (2018). Historical Development of Human Cognition. A Cultural-Historical Neuropsychological Perspective. Springer. ISBN 978-9811068867.
  • Coren, Stanley; Lawrence M. Ward; James T. Enns (1999). Sensation and Perception. Harcourt Brace. p. 9. ISBN 0-470-00226-3.
  • Lycan, W.G., (ed.). (1999). Mind and Cognition: An Anthology, 2nd Edition. Malden, Mass: Blackwell Publishers, Inc.
  • Stanovich, Keith (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven (CT): Yale University Press. ISBN 978-0-300-12385-2. Lay summary (PDF) (21 November 2010).

