
Jacob W. Gruber
Dr. Jacob Gruber
Born: February 26, 1921
Died: June 2, 2019 (aged 98)
Nationality: American
Alma mater: Oberlin College
Occupation(s): Anthropologist, archaeologist, educator, historian of science

Jacob William Gruber (February 26, 1921 – June 2, 2019) was an American anthropologist, archaeologist, historian of science, and educator.


Biography

Gruber was born in Pittsburgh, Pennsylvania, on February 26, 1921, and grew up in Akron, Ohio, as one of seven children. He attended Buchtel High School and received a bachelor's degree from Oberlin College in 1942, with honors in Classical Archaeology, and was elected a member of Phi Beta Kappa. Drafted into the United States Army upon graduation, he served with the 254th Engineer Supply Company of the Persian Gulf Command, based in Iran, from 1942 to 1945. He then returned to Oberlin, where he received his M.A. in Sociology and Anthropology in 1947; his thesis was Three Aspects of Early Iron Age Culture in Greece.[1]

He received his Ph.D. in anthropology at the University of Pennsylvania under Loren Eiseley, and also studied with the ethnographers Frank Speck and Alfred Irving Hallowell. He spent the summers of 1954 and 1955 visiting Iroquois reservations in New York State to study ceremonial masks, staying at the Allegany Reservation near Salamanca, New York, in 1955.[2] His dissertation on the 19th-century naturalist St. George Jackson Mivart later became the book A Conscience in Conflict: The Life of St. George Jackson Mivart. He died in June 2019 at the age of 98.[3]

History of science

Gruber went on to become a specialist in the history of the 19th-century natural sciences, writing on Thomas Henry Huxley, Charles Darwin, and especially Richard Owen, founder of the British Museum (Natural History). Over a period of nearly two decades Gruber located, redacted, and edited the extensive extant Owen correspondence.

In 1984 Gruber was a Fulbright Scholar at the Turnbull Library in Wellington, New Zealand, where he researched the history of the discovery and description of the moa.[4]

Professional career

He was an emeritus professor[5] at Temple University in Philadelphia, where he taught from 1947 through 1982. At Temple he founded the Department of Anthropology in 1964, served as its chair from 1964 to 1970, and oversaw the department's rapid growth through the 1960s. Beginning in 1955 he developed its continuing program of archaeological field sessions, through which he directed over a dozen excavations and helped pioneer the field of historic archaeology in the Northeastern United States.

From 1970 to 1973 he was the founding director of Temple's liberal arts program in Rome, Italy, where he established the university's long-running program and instituted a series of student tours, many of which he led to southern Italy. He recruited many young Italian scholars to teach at Temple alongside their American counterparts. Following his tenure in Rome he maintained close contacts with Italy, returning often to continue ethnographic work in Calabria and, after his retirement from teaching, dividing his time between the United States and a farmhouse in Umbria that he and his wife restored.

Returning from Italy, he was appointed chairman of the Pennsylvania Historical and Museum Commission by Governor Milton Shapp and presided over the commission during the active years surrounding the national bicentennial celebration in 1976. In 1979 Gruber presided over the dedication of a historical marker commemorating the West Chester African-American artist Horace Pippin.[6] In addition to archaeology, his primary field of interest was the 19th-century history of science, particularly in Great Britain. His Ph.D. dissertation, and later book, focused on the 19th-century English natural scientist St. George Jackson Mivart. Gruber conducted many years of research on the life and work of the comparative anatomist Sir Richard Owen, an anti-Darwinist who founded the British Museum (Natural History), and created a catalog of Owen's extensive correspondence.

Archaeology

Gruber began archaeological work as a student at Oberlin College in 1941, when he and Prof. H. B. May led a small excavation of an Erie Indian campsite and burial ground.[7] Gruber expanded his archaeological work while teaching at Temple University in the 1950s and continued to excavate prehistoric and historic sites throughout the Northeastern United States into the 1970s. He developed Temple's field school program, which trained student archaeologists during summer excavations and continued artifact processing and analysis in the Anthropology Lab during the year. "The only way kids can learn about anthropology is by getting their hands dirty," he told a reporter in 1965.[1] In 1966 he led a team of American students to collaborate on the prehistoric excavation at Bylany, Czechoslovakia.[8] It was a rare educational and cultural exchange with a Communist country at that time, and it forged long-lasting friendships with Czech archaeologists and anthropologists.

Gruber's most extensive excavation of the 1960s was the Mohr Site, a former Susquehannock Indian village near Lancaster, Pennsylvania.[9] In the mid-1970s Gruber led Temple students in excavating the Assunpink Creek Site in Mercer County, New Jersey, before the construction of a dam.[10] Gruber is recognized as one of the pioneers of historic archaeology in the United States. Working with National Park Service archaeologist John L. Cotter, he conducted a series of archaeological excavations in historic Philadelphia beginning in the late 1950s, during the period of urban renewal that saw the demolition of many old buildings. One of the earliest was the excavation of a privy at 315–317 Walnut Street, the site of the house of Dr. William McIlvaine.[11] He also led excavations of the Allegheny Portage Railroad in western Pennsylvania, the first French settlement in the New World at Saint Croix Island, Maine, and Fort Putnam at West Point.

  • Buri Site (Birmingham, New Jersey), 1955
  • Study of Huron Ossuary Remains, ca. 1957
  • Schacht Site, Wyoming Valley, Pennsylvania, 1961
  • 315–17 Walnut Street, Philadelphia, 1962[12]
  • Mohr Site, Bainbridge, Pennsylvania, 1963–
  • Archaeological Survey Along the Right-of-Way of FA1-1, State of Delaware, 1965 (?)
  • Bylany, Czechoslovakia, 1966
  • Antonio Site, Pennsylvania
  • St. Croix Island, Maine, 1969–70[13]
  • Assunpink Creek, New Jersey, 1976[14]
  • Fort Putnam, West Point, New York[15]

Ethnography

While in Rome in the 1970s, Gruber worked with the Italian colleagues Tullio Tentori and Carla Bianco to explore southern Italy, and for many years he conducted ethnographic investigations in the small town of Nocara, Calabria, and the surrounding region. His interest focused on religious festivals, especially the annual procession of the Madonna in Nocara.

Books

  • Jacob W. Gruber, A Conscience in Conflict: The Life of St. George Jackson Mivart. Philadelphia: Temple University Publications; New York: Columbia University Press, 1960.
  • Jacob W. Gruber (ed.), The Philadelphia Anthropological Society: Papers Presented on its Golden Anniversary. Temple University Publications; New York: Columbia University Press, 1967.
  • Jacob W. Gruber and John C. Thackray, Richard Owen Commemoration: Three Studies. London: Natural History Museum Publications, 1992.

References

  1. ^ a b Tobey Ann Gordon, "Prof Teaches Mores of Man at Nearby Prehistoric 'Dig'," Jewish Exponent (Nov. 26, 1965).
  2. ^ Jacob Gruber, "Anthropology in Search of Art," Temple University Alumni Review (June 1955), 20–22.
  3. ^ Ancestry LifeStory: Jacob William Gruber.
  4. ^ Jacob W. Gruber, "The moa and the professionalizing of New Zealand Science," The Turnbull Library Record, 20:2 (Oct. 1987), 61–100.
  5. ^ "The Department of Anthropology at Temple University". www.temple.edu. Archived from the original on 2009-11-25.
  6. ^ Chester Citizen (June 14, 1979), p. 1.
  7. ^ "Archeology Expedition," The Picolymp of Oberlin College, 4:2 (Dec. 1941), 10–12.
  8. ^ "Archaeology Students Off to Czechoslovakia," The Montgomery Post, Norristown, PA (June 8, 1966); Adolph Katz, "Temple Archaeology Team Finds Stone-Age Relics Behind Iron Curtain," The Sunday Bulletin (Sept. 25, 1966).
  9. ^ Adolph Katz, "Temple's Diggers Find Indian Relics 500 Years Old," The (Philadelphia) Sunday Bulletin (June 21, 1964); "Farm Here Yields Archaeology 'Find': Graves of Shenk's Ferry People Unearthed Near Bainbridge," Lancaster New Era (June 24, 1964).
  10. ^ John Reilly, "If they can dig it, we'll know who lived here before," Sunday Times Advertiser, Trenton, NJ (June 20, 1976).
  11. ^ Joseph Rizzo, "Excavations reveal 18th-century life in Philadelphia," Temple University News (May 24, 1960).
  12. ^ "Looking back…". 24 June 2011.
  13. ^ Jacob W. Gruber, Excavations at St. Croix Island. Ms., National Park Service (1970).
  14. ^ Jacob W. Gruber, Report: The Impact of Planned Recreational Facilities on the Archaeological Resources at Site 20, Assunpink Creek, Mercer County, New Jersey (1979); Jacob W. Gruber, Archaeology at Assunpink 20, New Jersey, unpublished report, SHPO, New Jersey, 1982.
  15. ^ Jacob W. Gruber, "The Ecology of Fort Putnam at West Point," paper read at the annual meetings of the Society for Historical Archaeology, Philadelphia (1976).