Topical Article

Ethics, Expectations & Education: What lies ahead for AI in medical devices?

Posted by Congenius

Artificial Intelligence is changing the world. In a MedTech context, it enables a small team to affect many lives. To navigate AI effectively, we need to fully appreciate the ethical implications, have realistic and informed expectations of its capabilities, and understand how to utilise it to fill knowledge gaps.

Our Head of eHealth, Paul Gardner, shares more on AI Ethics, Expectations and Education…

AI & Ethics: Managing Bias

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is apparent with word embeddings, a popular framework for representing textual data as vectors that is used in many machine learning and natural language processing tasks. For example, an AI system learning from the internet would solve the equation ‘man is to king as woman is to x’ as follows:

Man: King – Woman: Queen

This isn't hugely problematic, until we consider that it would also solve the equation ‘man is to computer programmer as woman is to x’ accordingly:

Man: Computer Programmer – Woman: Homemaker

A less biased version would result in Man: Computer Programmer – Woman: Computer Programmer. Or indeed, Man: Homemaker – Woman: Homemaker. Further still, it could be argued that removing biased word embeddings entirely would facilitate a truly unbiased outcome.
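To make the mechanics concrete: analogy queries like the one above are typically answered by simple vector arithmetic over the learned embeddings. The sketch below is a minimal Python illustration using invented toy vectors (not real trained embeddings), showing how biased associations baked into the vectors surface in the answer:

```python
import numpy as np

# Toy word vectors, invented for illustration only. Real embeddings such as
# word2vec or GloVe are learned from large corpora and have hundreds of dimensions.
vectors = {
    "man":        np.array([1.0, 0.0, 0.2]),
    "woman":      np.array([-1.0, 0.0, 0.2]),
    "king":       np.array([1.0, 1.0, 0.3]),
    "queen":      np.array([-1.0, 1.0, 0.3]),
    "programmer": np.array([0.9, -0.5, 0.8]),
    "homemaker":  np.array([-0.9, -0.5, 0.8]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via vector arithmetic and cosine similarity."""
    target = vectors[b] - vectors[a] + vectors[c]
    best, best_sim = None, -np.inf
    for word, vec in vectors.items():
        if word in (a, b, c):
            continue
        sim = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "king", "woman"))        # queen
print(analogy("man", "programmer", "woman"))  # homemaker -- the bias encoded in the vectors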

Considering bias in AI is crucial, because without the correct decision-making process, AI can discriminate unfairly on the basis of gender or ethnicity, and against minority groups. For example:

  • A large company recently discovered that its AI hiring tool discriminated against women
  • Facial recognition technology has historically worked better for light-skinned than dark-skinned individuals
  • Using AI for bank loan approvals has in some cases proven to discriminate against ethnic groups

Addressing Unconscious Bias

Artificial Intelligence has the potential to help us make fairer decisions – but only if we take care to work toward developing fairness within the AI systems we create.

The growing use of AI in healthcare has provoked a debate about bias and fairness. However, human decision making in this domain can also be flawed, shaped by individual and societal biases that are often unconscious. The key question is: Will an AI system's decisions be less biased than human decisions, or will the use of AI make dealing with bias even more difficult? There are two opportunities here:

  1. Use Artificial Intelligence to identify and reduce the effect of human biases
  2. Improve AI systems themselves (how they leverage data, how they are developed, deployed, and used) to prevent them from reinforcing human and societal biases or creating bias of their own.

Realising these opportunities will require cross-discipline collaboration to further develop technical improvements, operational practices, and ethical standards.

How do we define and measure what's “fair”?

With human judgement still needed to ensure AI-supported decision making is fair, we need to consider:

  1. At what point in the development process is human judgement needed?
  2. In what form is this judgement required?
  3. Who decides when an AI system has sufficiently minimised bias?
  4. In which, if any, situations should fully automated decision making be acceptable?

These questions can't be resolved with an optimisation algorithm, and machines cannot be left to determine the right answers. Human judgement that draws from benchmarks set within the fields of social sciences, law and ethics is needed to develop the necessary standards.

Some recent processes and methods that aim for increased fairness include:

  • Data sheets for data sets
  • Model cards for model reporting
  • Impact assessments
  • Pre-deployment fairness audits
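To illustrate what the last item might involve in practice, a pre-deployment fairness audit typically computes simple group-level metrics such as demographic parity and equal opportunity gaps. The following Python sketch uses invented example data and group labels; it is only meant to show the shape of such a check, not a complete audit framework:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups (0 = parity)."""
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between two groups (0 = parity)."""
    tpr = {}
    for g in ("A", "B"):
        mask = (group == g) & (y_true == 1)
        tpr[g] = y_pred[mask].mean()
    return abs(tpr["A"] - tpr["B"])

# Hypothetical predictions from a diagnostic model, split by a protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print("Demographic parity gap:", demographic_parity_difference(y_pred, group))
print("Equal opportunity gap:", equal_opportunity_difference(y_true, y_pred, group))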

On an optimistic note, progress in managing bias in AI may also raise the benchmark for judging whether human decisions are fair or biased; better data, analytics, and AI could become a powerful new tool for examining human biases.

Combatting bias

So how can we ensure bias is reduced and managed on an ongoing basis, to avoid reinforcing unhealthy stereotypes in the development of our AI systems for medical devices?

Use technical solutions

  • Utilise the process of “zeroing out” bias in word embeddings during development, so that biased associations are removed from the vectors (a minimal sketch follows after this list)
  • Use less biased and more inclusive data (e.g. using xx instead of yy)
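As a rough illustration of the “zeroing out” idea mentioned above: identify a bias direction in the embedding space (here, a hypothetical gender direction) and remove each word vector's component along it. This mirrors the hard-debiasing approach described in the word-embedding literature; the vectors below are invented purely for illustration:

```python
import numpy as np

def neutralise(word_vec, bias_direction):
    """Remove the component of a word vector that lies along the bias direction."""
    b = bias_direction / np.linalg.norm(bias_direction)
    projection = np.dot(word_vec, b) * b   # component along the bias axis
    return word_vec - projection           # "zeroed out" with respect to that axis

# Hypothetical bias direction, e.g. the difference between "he" and "she" vectors.
he  = np.array([0.8, 0.1, 0.3])
she = np.array([-0.8, 0.1, 0.3])
gender_direction = he - she

programmer = np.array([0.6, -0.4, 0.7])     # carries an unwanted gendered component
programmer_debiased = neutralise(programmer, gender_direction)

print(np.dot(programmer_debiased, gender_direction))  # ~0: no residual gender component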

Ensure transparency and implement auditing processes

  • By improving the transparency of our systems and implementing robust auditing processes, we make it easier to recognise bias and to improve our systems accordingly

Strive for a diverse workforce

  • If our teams are diverse, our capacity to achieve a truly diverse perspective will naturally broaden

Managing Expectations

To enable us to make effective decisions regarding AI systems in our medical devices, it is important to have a realistic view of AI. I recently came across the “Goldilocks rule for AI”, i.e. being neither too optimistic nor too pessimistic, but striking just the right balance.

We shouldn't be too optimistic in our expectations of AI. The idea of sentient, super-intelligent AI robots serves only to encourage unnecessary fears of AI and distracts from the real issues.

But we also want to avoid being overly pessimistic. The notion that because AI cannot do everything, another “AI winter” must be on its way is misinformed. The idea that AI will simply “take our jobs” is equally inaccurate: the McKinsey Global Institute recently reported that whilst 400–800 million jobs worldwide will be displaced by AI by 2030, 555–890 million jobs will be created over the same period as a result of AI.

So we need to strike a balance: AI can't do everything, but it will transform the MedTech industry and society as a whole for the better – if we address its limitations effectively.

Addressing the limitations of AI

I addressed above how AI has performance limitations relating to biased input data. Additionally, it's important to mention explainability and the risk of adversarial attacks.

Explainability in AI presents a challenge. An AI system can often tell you something, but it cannot always explain how it knows. In fairness – how does a human explain how a pen is a pen? We just know. Techniques have, however, been developed to address this challenge. Although neural networks make it difficult to explain how a particular decision was reached and which features in the data led to the result, explainability techniques can also play a role in mitigating bias: they could help identify whether decision-making factors reflect bias, and could facilitate more accountability than in human decision making (which typically isn't subject to such thorough probing).
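One such technique is permutation feature importance, which estimates how strongly a model's predictions depend on each input feature – including, potentially, a sensitive attribute. The sketch below uses scikit-learn with synthetic data and invented feature names purely to illustrate the idea:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Synthetic data standing in for tabular clinical features; column 0 plays the
# role of a hypothetical sensitive attribute we want to check the model against.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["sensitive_attribute", "age", "biomarker_1", "biomarker_2", "biomarker_3"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure the drop in score.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name:20s} {importance:.3f}")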

The risk of adversarial attacks on AI is very real. Whilst defences do exist, they incur cost, and similar to “spam vs anti-spam” we may find ourselves in an arms race for some applications. As a result, cybersecurity measures are a crucial factor in the development of AI in medical devices. Take a look at the FDA's recent resources on cybersecurity related to SaMD here.
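To make the threat concrete, the fast gradient sign method (FGSM) is a classic adversarial attack: a small, carefully signed perturbation of the input flips a model's prediction. The following sketch applies the idea to a simple logistic-regression classifier with invented weights; real attacks target far more complex models, but the mechanism is the same:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Fast gradient sign method for a simple logistic-regression classifier.

    The gradient of the cross-entropy loss w.r.t. the input is (p - y) * w;
    nudging x by epsilon in the sign of that gradient maximally increases the loss.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Hypothetical trained weights and a correctly classified input.
w, b = np.array([2.0, -1.0, 0.5]), -0.1
x, y = np.array([0.3, 0.1, 0.4]), 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.4)
print("original score:   ", sigmoid(np.dot(w, x) + b))      # > 0.5 -> class 1
print("adversarial score:", sigmoid(np.dot(w, x_adv) + b))   # pushed below 0.5 -> class 0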

Education: How to use digital health approaches to address knowledge gaps

Implemented responsibly – addressing ethical issues, setting realistic expectations and managing limitations – AI will continue to fill knowledge gaps in the MedTech industry. Here are some examples:

New uses of AI & machine learning

Developers are working to apply machine learning and AI to monitor and identify epidemics, to develop virtual nursing technology, and to provide image analysis and interpretation capabilities to diagnostic solutions. These complement existing research regarding using AI to model and predict biological and chemical interactions in drug development, and other solutions delivering insights based on big data.

AI and robotics

The global medical robotics market is forecast to grow to $20 billion by 2023. 5G connectivity, enabling fast and almost immediate data exchange, will present new opportunities for using robots in the future digital health economy. Robots will be applied for various purposes including transportation, prescription dispensing, disinfection, communication/telepresence and even as surgical assistants. Advancements in computer and machine vision will increasingly enable robots to sense and interpret visual input, adding increased value to their use in diagnostics, surgery and more.

AI and Digital twins

Using a digital twin, a doctor can model and safely determine the possible success of a medical procedure or treatment, enabling them to make more personalised (and more effective) recommendations in therapy. AI has a strong use case in this area.

Natural Language Processing and voice control

NLP provides great use cases in predictive analytics and diagnosis, while voice activation and control could have a major impact on senior care and supporting patients with disabilities.

In conclusion…

Artificial Intelligence will allow the MedTech industry to advance at a faster rate, facilitating solutions that will significantly improve the lives of patients. But to sustain this effective development and growth, those who develop AI systems within medical devices need to remain aware of the evolving societal context in which their medical device is being developed. When it comes to AI, you get out of it what you put in. Creating unbiased AI systems that will truly enhance medical device development will not just require skilled technical expertise, but also a combination of analytical, objective and reflective thinking, and a deep understanding of the relevant ethical considerations.

If you have an AI in MedTech challenge, our eHealth team can help. Contact us by emailing [email protected], by filling out our contact form, or by giving us a call on +41 44 741 04 04.
