ICDS co-hire Guido Cervone co-authors national report on ethical artificial intelligence use
Posted on June 21, 2023

UNIVERSITY PARK, Pa. — From facial recognition on smartphones to digital voice assistants like Siri to tools like ChatGPT, artificial intelligence and machine learning are part of our everyday lives. Their benefits are many, but their rapid rise is also spurring questions about their risks. The ethical use of artificial intelligence and machine learning (AI/ML) in scientific research is also becoming a more visible and important consideration, according to Guido Cervone, professor of geography and of meteorology and atmospheric science at Penn State.
Cervone contributed to a report on principles and best practices for the ethical use of AI/ML in Earth and space sciences research, published by the American Geophysical Union (AGU). Cervone, who serves as president of AGU’s Natural Hazards section, was one of six members of the steering committee and one of 20 authors of the report.
AI/ML tools and methods are enabling advances in understanding the Earth and its systems at all scales, informing critical decisions by researchers, organizations and government agencies. According to Cervone, AGU’s report, “Ethical and Responsible Use of AI/ML in the Earth, Space and Environmental Sciences,” was designed to support these advances while mitigating potential risks.
“We’re collecting more data than ever on every aspect of the Universe — from Earth’s inner core to stars far outside our solar system and increasingly analyzing these data together using computational approaches,” said Brooks Hanson, executive vice president of science at AGU. “It’s an incredibly exciting time for science, but such meteoric change can bring ambiguity in how scientists carry out their work. As a trusted scientific organization, we must make sure that the endless possibilities posed by AI/ML are balanced by clear ethical standards to ensure researchers conduct their studies responsibly in a manner that will benefit the greater scientific community.”
According to AGU, ethically using AI/ML in research requires a new way of thinking about methods. For example, validation and replication are core principles of science, but both can be complicated in research that uses AI/ML, where the inner workings of models can be opaque. Traditionally, a study should explain the entire scientific process in detail, but a study using AI/ML can document only the steps in the process, not the actual computation behind the results. Additionally, studies utilizing AI/ML should document potential biases, risks and harms, especially as related to the promotion of justice and fairness.
“Trust is a critical topic for AI/ML research, but it’s not one we’re going to answer today,” said Cervone, who is also the associate director of Penn State’s Institute for Computational and Data Sciences. “Today’s AI/ML methods often represent knowledge in a form that is hard to verify and understand, and thus lack some of the mechanisms used to assess confidence in findings. Utilizing AI/ML requires a certain amount of trust generally not discussed with other analytical methods historically used in the Earth sciences. There are many opinions in this space, so it’s clear we’ll need to continue the conversation around this in detail.”
AI/ML are powerful tools for evaluating diverse datasets, which can help Earth, space and environmental scientists uncover new insights about our planet and improve scientific predictions, including alerting communities to natural hazards, such as tornadoes and wildfires, or forecasting future climate-related risks, such as rising sea levels.
“When it comes to determining bias and uncertainty in datasets and models, our researchers are increasingly improving how to prepare documentation to make these details available,” said Shelley Stall, vice president of open science leadership at AGU. “Proportionally, we’re seeing a large uptick in Earth, space and environmental science research utilizing AI/ML methods. The principles identified in this report will provide ethical guidelines to inform researchers and their organizations on the importance of connecting known bias and uncertainty to decisions made about AI/ML configuration and workflows.”
Funding for this initiative was provided by NASA.