Files in this item:

File | Description | Size | Format
---|---|---|---
jps_569_23.pdf | | 2.55 MB | Adobe PDF
Title: 道徳起源論から進化倫理学へ (三)
Other title: From the Origin of Morality to the Evolutionary Ethics, part II : Reductionism in the Normative Ethics, continued
Author: 内井, 惣七
Alternative name: Uchii, Soshichi
Issue date: 10-Apr-2000
Publisher: 京都哲学会 (京都大学文学部内)
Journal title: 哲學研究
Volume: 569
Start page: 23
End page: 70
Abstract: This issue will conclude part II and the whole paper. My reductionist program in normative ethics aims at constructing "the morality as it ought to be" on the basis of rational choices. Its basic idea can be stated as follows: the morality we actually have is not necessarily systematic; it is a bunch of norms, duties, and values which may sometimes collide with each other; and personal preferences are often biased and may therefore lead to differences in moral preferences and moral judgments. However, if we reason on the basis of rational preferences rather than actual preferences, such biases and differences may be removed to a considerable degree, and a systematization of norms and values may become possible. The morality as it ought to be is the one which can be justified, in the sense that it can be accepted, on such a rational basis. Thus, if we can use a notion of rationality that does not presuppose morality, reductionism can be maintained. However, rationality has many interpretations, the most serious difference being between (1) rationality with full information and (2) rationality with limited information and capacity. It is well known that Herbert Simon drew our attention to this distinction; he emphasized the importance of (2), which he named "bounded rationality", and in which a "satisficing" principle replaces the "maximizing" principle of (1). Thus our first task is to examine this distinction and see whether there is a possibility of reconciling the two. On my diagnosis, although models of bounded rationality are far more successful as a means of describing our actual decisions, the significance of the maximization principle reappears if we try to improve bounded rationality or its decisions, and to look for a systematization of such decisions, which may often be incoherent, fragmentary, or ad hoc.
Simon himself admits that a "satisficing" process can be transformed into a "maximization" process if we allow an extended framework and extra cost for calculation. Thus my proposal is that we allow the improvement of rationality, while always sticking to the condition of the "boundedness" of our rationality; the maximization principle then works at the level of improvement, prescribing that we choose the better of any two alternatives available at that level. However, such improvement is always local, as is the case with evolution by natural selection; but that is all we can do as finite beings. I will illustrate the significance of my proposal in terms of my criticism of Dennett's "moral first aid", which is his version of the defence of bounded rationality in morals. Since we stick to the condition of boundedness, his criticism of high-minded ethics is well taken. But as long as we are committed to improvement in moral matters, as Dennett himself seems to be, the significance of maximization is revived. We should aim at maximization via local improvement, although there is no guarantee of realizing global maximization. Now, in order to reconstruct normative ethics on this basis of bounded-but-improvable rationality, we turn our attention to "the universalizability of moral judgment" and "the fair treatment of everyone's good" (two of the most crucial conditions of traditional morality). How can our reductionist program derive these normative conditions or their near-equivalents? As I see it, universalizability comes to be accepted as our rationality is improved in social life. Morality is a form of "reciprocal altruism" as discussed in recent evolutionary biology. But this reciprocity does not demand strict universality, only reciprocity within the group to which one belongs.
However, this in-group reciprocity has room for improvement, as long as there may be a possibility of gaining more by extending social intercourse beyond the group: it has its own "opportunity cost". Thus, by extending our social relationships beyond the group, we can obtain a greater benefit; but for that, we have to extend the scope of moral consideration, and this elicits "more" universalizability, which in turn demands an improvement of our rationality (extending our knowledge, consideration, and calculation a bit further). This speculation can be supported by recent empirical investigations by T. Yamagishi. He focused on the notion of "trust", as contrasted with "commitment", and its role in our social life under uncertainty. He argues that while a "commitment" relation is a means of responding to social uncertainty by in-group favoritism, a "trust" relation is another means, one of extending our relationships to new partners. And he argues that "trust" can develop where both social uncertainty and opportunity cost are high, because under such conditions there are possibilities of obtaining a greater benefit by trusting others. He also argues that one needs higher "social intelligence" in order to utilize this "trust" relation. Thus, although his subject is more concrete and specific than our problem of universalizability, we can apply his scenario to our case, and I point out that the development of "social intelligence" is an important factor in improving our bounded rationality. We can also apply the same idea to "the fair treatment of everyone's good", which is a more important and substantive condition for morality. The point can be illustrated by the incident of the "Enola Gay" exhibit at the Smithsonian Air and Space Museum in 1995.
In order to give fair consideration to everyone's good (that is, the good of "everyone" involved in a given case), we have to give an appropriate weight to everyone's good, in addition to universalizing our consideration; and how should we do this? In order to assess the significance of the two atomic bombs exploded on Hiroshima and Nagasaki, we have to give an appropriate weight to a Japanese life and an American soldier's life, among many other factors (such as the escalation of "strategic bombing" or the post-war armament race between the US and the USSR) involved in the case; but how should we do this? You can stick to your prejudice (e.g., the myth of "a million lives saved by the bombs"), or you may give a very small weight to the 210,000 lives lost in Hiroshima and Nagasaki (by the end of 1945). But a far better alternative is to examine the relevant facts involved (as the Smithsonian staff tried to do), to represent the victims' sufferings as well as the American soldiers' agony (thus appealing to our imagination and sympathy), and so forth, and then, in light of all this, to form your opinion. Such a decision is made in light of improved rationality. And do not say this is impossible for bounded rationality; the Smithsonian staff provided many good materials through their long labor, and that is one way to improve bounded rationality. And we can expect a greater benefit by going beyond in-group favoritism and exclusive consideration. Of course, there is the hard question of the "interpersonal comparison" of good or preference. For this, I propose a conventionalism coupled with revision in light of improved rationality: the initial weights given to different people's good may be conventionally chosen by each individual, but they can be revised in light of improved rationality.
Interpersonal comparison can be made possible by a convention, and if that convention is unacceptable to many people, it can be changed and, hopefully, improved by means of mutual criticism and in light of improved rationality; thus my approach is evolutionary to this extent. A summary and a prospect for reductionism conclude the paper.
DOI: 10.14989/JPS_569_23
URI: http://hdl.handle.net/2433/273768
Appears in collection: 第569號

All items in this repository are protected by copyright.