For as long as we have been able to ponder, humans have wondered about the mind. How does the mind work? What do our thoughts tell us about the world around us, and are those thoughts accurate? These questions have been asked for thousands of years, and it is against this background that the field of cognitive science emerged 35 years ago.
With twentieth-century developments in mathematics, logic, computing, and artificial intelligence, theorists from a variety of scientific and theoretical fields began to develop an intriguing thesis: the mind is a kind of computer. Most simply, when new information comes into the mind, that information is processed according to a set of rules and by drawing upon relevant information from stored memory, all to deliver some kind of output. This was the basis of early modern computing, and applying these concepts to the study of the mind has proved a fruitful and exciting research agenda.
Cognitive relativism asserts the relativity of truth. Because of the close connections between the concept of truth and concepts such as knowledge, rationality, and justification, cognitive relativism is often taken to encompass, or imply, the relativity of these other notions also. Thus, epistemological relativism, which asserts the relativity of knowledge, may be understood as a version of cognitive relativism, or at least as entailed by it.
This kind of relativism can take different forms depending on the nature of the standpoint or framework to which truth is relativized. If truth is relativized to the individual subject, for instance, the result is a form of subjectivism. If the standpoint is an entire culture, the result is some form of cultural relativism. Other possible frameworks include languages, historical periods, and conceptual schemes. These frameworks do not exclude one another, of course, and in the positions developed by thinkers such as Thomas Kuhn and Michel Foucault (both generally regarded as holding relativistic views of truth) they are presented as interwoven.
A non-cognitivist theory of ethics implies that ethical sentences are neither true nor false; that is, they lack truth values. What this means will be investigated through a brief logical-linguistic analysis explaining the different illocutionary senses of normative sentences. The analysis will show how normative sentences play their proper role even though they lack truth values, a fact obscured by the ambiguous use of those sentences in our language. The main body of the article explores various non-cognitivist logics of norms, from the early attempts by R. M. Hare and C. L. Stevenson to the more recent ones by Allan Gibbard and Simon Blackburn. Jorgensen’s Dilemma and the Frege-Geach Problem are two important aspects of this logic of norms. Jorgensen’s Dilemma is the problem, in the philosophy of law, of inferring normative sentences from other normative sentences; it is an apparent problem because inference is typically understood as relating sentences that have truth values. The Frege-Geach Problem is a problem in moral philosophy involving inferences in embedded contexts or in illocutionarily mixed sentences. The article ends with a taxonomy of non-cognitivist theories. See also Ethical Expressivism.
The cognitive sciences minor provides an interdisciplinary approach to the study of mind, brain, and cognition, broadly construed. It is designed to complement the work of majors and those who intend to pursue graduate degrees in a variety of fields, including anthropology, biology, communication studies, computer science, digital media, education, management/marketing and management information systems, medicine, philosophy, and psychology. The study of cognitive sciences can also provide a more comprehensive background for students with more immediate career goals in the fields of neural network modeling, neuroimaging, information technology, clinical or counseling psychology, biotechnology, and others.
Within the philosophy of science there have been competing ideas about what an explanation is. Historically, explanation has been associated with causation: to explain an event or phenomenon is to identify its cause. But with the growth and development of philosophy of science in the 20th century, the concept of explanation began to receive more rigorous and specific analysis. Of particular concern were theories that posited the existence of unobservable entities and processes (atoms, fields, genes, and so forth). These posed a dilemma: on the one hand, the staunch empiricist had to reject unobservable entities as a matter of principle; on the other, theories that appealed to unobservable entities were clearly producing revolutionary results. Thus philosophers of science sought some way to characterize the obvious value of these theories without abandoning the empiricist principles deemed central to scientific rationality.
What do we know about how people make moral judgments? And what should moral philosophers do with this knowledge? This article addresses the cognitive science of moral judgment. It reviews important empirical findings and discusses how philosophers have reacted to them.
Several trends have dominated the cognitive science of morality in the early 21st century. One is a move away from strict opposition between biological and cultural explanations of morality’s origin, toward a hybrid account in which culture greatly modifies an underlying common biological core. Another is the fading of strictly rationalist accounts in favor of those that recognize an important role for unconscious or heuristic judgments. Along with this has come expanded interest in the psychology of reasoning errors within the moral domain. Another trend is the recognition that moral judgment interacts in complex ways with judgment in other domains; rather than being caused by judgments about intention or free will, moral judgment may partly influence them. Finally, new technologies and neuroscientific techniques have led to novel discoveries about the functional organization of the moral brain and the roles that neurotransmitters play in moral judgment.