Science involves both the collection of data and the mobilisation of statistical models built on that data. As genomic data (an individual's genetic code) and artificial intelligence algorithms become increasingly prevalent in science and technology, it is critical to consider the ethical and moral implications of their use.
At this event, Sasha Henriques speaks to the importance of representation in genomic datasets and whether categorising human genetic diversity is useful or relevant to genetic research. Dr Claire Benn will discuss the need to consider moral outcomes in the choice of artificial intelligence algorithms.
Fixing Representation in Genetics, Ethics and Statistics
Sasha Henriques
(Principal genetic counsellor at Guy's and St Thomas' NHS Foundation Trust (GSTT) and PhD candidate with Wellcome Connecting Science)
Scientists must navigate biases to forge ethical solutions. But can objectivity alone ensure true ethical practice? Let's discuss how we understand differences in people. The way we categorise groups like race and ethnicity can lead to unfair treatment and misunderstandings. When scientists study genes, they face a big challenge: how to make sure their findings apply to everyone, not just a few groups. Sometimes, mixing up these categories can make it harder to find the truth about genes and health. Sure, some studies have shown that ideas about race and ethnicity can be harmful. But how do we fix this problem? Are there better categories to use? Should scientists get rid of these categories altogether, or should they use them to try to make things more fair? Can scientists find a fair way to study genes that doesn't leave anyone out? It's a tough question, but it's important to talk about.
Signalling virtue: Reducing moral ambiguity in human-robot interactions
Dr Claire Benn
(Assistant Professor at the Leverhulme Centre for the Future of Intelligence)
Robots need to abide by moral constraints. One such constraint is the directive not to cause harm. In interacting with humans, though, it is not always sufficient for a machine to perform permissible actions: those actions should be unambiguously permissible. Systems ought to reassure observers that they understand the constraints they are under, and that they have abided, and will abide, by those constraints. Dr Benn will discuss a framework in which systems avoid actions that are indistinguishable from forbidden behaviours, and show how it can be adapted to various contexts of human-robot interaction.
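To make the idea concrete, here is a minimal illustrative sketch, not Dr Benn's actual framework: one way to operationalise "unambiguously permissible" is to have an agent choose only those permissible actions that lie far enough, in some observable feature space, from every forbidden action. All names, features and the margin below are hypothetical.

```python
# Hypothetical sketch: filter permissible actions so that none of them
# could be mistaken, within a given margin, for a forbidden behaviour.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    features: tuple[float, ...]  # e.g. speed, proximity to a human

def distance(a: Action, b: Action) -> float:
    """Euclidean distance in feature space, standing in for how
    distinguishable two behaviours look to an observer."""
    return sum((x - y) ** 2 for x, y in zip(a.features, b.features)) ** 0.5

def unambiguously_permissible(
    permissible: list[Action],
    forbidden: list[Action],
    margin: float,
) -> list[Action]:
    """Keep only permissible actions that no forbidden action
    resembles within the ambiguity margin."""
    return [
        a for a in permissible
        if all(distance(a, f) > margin for f in forbidden)
    ]

# Example: a robot passing a person can slow down well before it must,
# so the manoeuvre cannot be mistaken for an aggressive approach.
safe = unambiguously_permissible(
    permissible=[Action("pass_slowly", (0.3, 1.5)),
                 Action("pass_fast", (1.4, 0.6))],
    forbidden=[Action("charge_at_person", (1.5, 0.5))],
    margin=0.5,
)
print([a.name for a in safe])  # ['pass_slowly']
```

On this toy reading, "pass_fast" is permissible but too close in appearance to the forbidden behaviour, so the agent signals its compliance by choosing the clearly distinguishable option instead.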
Other Brewdog events
2024-05-14
The Philosophy of Time and Self
Brewdog
15 Bene't St, Cambridge, CB2 3QN, United Kingdom
2024-05-15
Citizen Science: Research Gone Wild
Brewdog
15 Bene't St, Cambridge, CB2 3QN, United Kingdom