PriMera Scientific Engineering (ISSN: 2834-2550)

Research Article

Volume 3 Issue 5

Unpacking the Bias Challenges of Deep Learning in Clinical Applications: A Critical Exploration of the Impact of Training

Fred Wu* and Colmenares-Diaz Eduardo

October 20, 2023

DOI: 10.56831/PSEN-03-084

Abstract

The field of artificial intelligence (AI) in healthcare is rapidly expanding worldwide, with successful clinical applications in orthopedic disease analysis and multidisciplinary practice. Computer vision-assisted image analysis has several U.S. Food and Drug Administration-approved uses. Recent techniques with emerging clinical utility include whole-blood multicancer detection from deep sequencing, virtual biopsies, and natural language processing to infer health trajectories from medical notes. Advanced clinical decision support systems that combine genomics and clinomics are also gaining popularity. Machine and deep learning devices have proliferated, especially for data mining and image analysis, but this proliferation poses significant challenges to the safe and effective use of AI in clinical applications. Legal and ethical questions inevitably arise. This paper proposes a training bias model and training principles to address the potential harm to patients and the adverse effects on society caused by AI.
