During the introductory session on the Liberal Arts and modern scientific methodologies, Giles posed a question to the group: could the failure of a scientific theory ever be a moral rather than an intellectual failure? Whilst Giles is the one to turn to if you’re interested in the motivation behind this question, I’d like to share some thoughts on some of the comments that ensued.
As John Coleman and David Thomson pointed out, scientists are faced with moral questions regarding the societal implications of their work – such as, for instance, its potential to be misused to the disadvantage of vulnerable members of society. A well-known example from the area of psychology is the issue of racial-ethnic differences in intelligence testing results. Whether these differences are real or merely artefacts of the measures employed has, for all too clear reasons, been the subject of a very contentious debate, especially in the US (see Gottfredson, 1988, for one view on this topic). Take the development and use of prenatal screening methods as one of many further examples of how science and ethics go hand in hand. As was also mentioned, whether or not to accept funding from sources that might put researchers into a conflict of interest is another moral question that scientists have to answer.
While scientific methodology is nowadays often portrayed as the one and only way to approach all our questions, this is where it fails us. Answers to moral questions such as the ones above cannot be found by going into the lab or into the field, collecting some data and then doing some whizzy statistics on them. This is because scientific methodology provides us with a (sometimes more, sometimes less accurate) description of a phenomenon. To answer moral questions, however, descriptive accounts simply miss the point – what is needed is a prescriptive guide for action. Moral questions about what the right course of action is are central to our being human and as prevalent in science as anywhere else, but scientific methodology itself cannot generate the answers.
Scientific disciplines are usually acutely aware of the role that moral considerations have to play. Ethics courses are part of every psychology degree; and all experiments need to have been approved by an ethics committee before they can be carried out. Ethical guidelines in psychology are strict and cover issues such as informed consent, deception, debriefing, withdrawal from the investigation and protection from physical and mental harm. It’s reassuring to see that in many ways, the sciences acknowledge the centrality of ethical questions to their endeavour.
Nevertheless, it should not be forgotten that the guidelines laid down have to somehow be agreed upon, and even when this has been achieved, one very quickly reaches the point where they are of little help. For instance, many psychological phenomena can only be studied when the participant is left in the dark as to what the point of the experiment is, and this inevitably annuls true informed consent (see Milgram, 1963, for his experiments on obedience to authority, which illustrate why this is hugely problematic). Ethical decisions have to be made on a case-by-case basis. This calls for a moral theory that provides a framework for these decisions. Scientists have thereby entered the realm of normative ethics, in which utilitarianism, the Kantian categorical imperative and Aristotelian virtue ethics, amongst others, are discussed as models of what makes actions morally right or wrong. Far from being an ivory-tower or philosopher’s-armchair activity, moral philosophy is indispensable, perhaps especially for modern scientists.
This discussion has admittedly taken us quite far from Grosseteste’s enquiries into the underlying structure of the natural world, as he surely wasn’t concerned with the moral acceptability of subjecting participants to experimentation. At this point I want to highlight to the non-medievalist reader that Grosseteste’s interests weren’t exclusively focused on natural phenomena. First and foremost, rather, he was a cleric, priest, and then, finally, a bishop, as he tells us. In a letter to William, bishop of Hereford, he is recommended by Gerald of Wales as a man “characterised by faith beyond all other virtues and gifts of the soul with which he excels, and conspicuous by his fidelity.” Grosseteste’s treatise De artibus liberalibus (On the Liberal Arts) underlines that scholars at the time were polymaths rather than highly specialised experts: the well-educated were versed in grammar, logic, rhetoric, music, arithmetic, geometry and astronomy – all of these applied to both natural and moral philosophy. Moreover, as the medievalists explained during the workshop, in medieval thinking beauty, the good and truth were inextricably bound together; the fragmentation between questions of what is true and what is morally right is a phenomenon that modern thinking first brought with it.
However appealing the idea of being such a polymath might seem to modern-day scholars and scientists, the vast body of insight that has been accumulated over the past centuries exceeds the limits of any individual human’s intellect. Nonetheless, the prospects aren’t all that bleak: thinking doesn’t have to happen in solitude, done by lone warriors. Instead, there are set-ups like the Ordered Universe project, where scholars and scientists from across the disciplines (medieval history, physics, religion, acoustics and vision science, literature, engineering, sculpture restoration, and many more) come together for the sake of doing justice to what a 13th-century thinker wrote about the underlying principles that order the universe. In modern times, individuals cannot be polymaths any more. But they can be part of a group whose joint knowledge truly spans the disciplines – and this holds potential for understanding that goes beyond the sum of what individual members could achieve on their own.
While I believe that the problem of making sense of Grosseteste’s thinking can only be solved through this interdisciplinary approach, I also think that its justification should not rest exclusively on tangible outputs such as the new editions of the treatises and the accompanying scientific papers. My experience from the four workshops I have so far been lucky enough to attend has been that the impact of this project is much wider. Everyone involved enters unfamiliar territory, steps out of their comfort zone, and is willing to think outside the box. To be sure, the good that comes out of this isn’t directly tangible and is difficult to capture in metrics. But its being more diffuse doesn’t entail its being any less valuable. To give a personal example, Giles’ question and the ensuing discussion on the relation between science and morality made me realise, maybe for the first time explicitly, why having done an undergraduate degree in psychology and philosophy has made sense – not just because I happen to be interested in both, but because the two really do need each other.
Gottfredson, L. S. (1988). Reconsidering fairness: A matter of social and ethical priorities. Journal of Vocational Behavior, 33(3), 293-319.
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67(4), 371–378.