Artificial Intelligence is changing the world. In a MedTech context, it enables a small team to affect many lives. To navigate AI effectively, we need to fully appreciate the ethical implications, have realistic and informed expectations of its capabilities, and understand how to utilise it to fill knowledge gaps.
Our Head of eHealth, Paul Gardner, shares more on AI Ethics, Expectations and Education…
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is apparent with word embeddings, a popular framework for representing textual data as vectors that has been used in many machine learning and natural language processing tasks. For example, an AI system learning from the internet would solve the analogy ‘man is to king as woman is to x’ as follows:
Man: King – Woman: Queen
This isn't hugely problematic, until we consider that it would also solve the analogy ‘man is to computer programmer as woman is to x’ as follows:
Man: Computer Programmer – Woman: Homemaker
A less biased version would result in Man: Computer Programmer – Woman: Computer Programmer. Or indeed, Man: Homemaker – Woman: Homemaker. Further still, it could be argued that removing the biased components from word embeddings altogether would facilitate a truly unbiased outcome.
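To make this concrete, here is a toy sketch (in Python, with hand-made 3-dimensional vectors standing in for real trained embeddings) of how such analogies are computed, and of the ‘neutralise’ step used by hard-debiasing approaches to project the gender direction out of occupation words:

```python
# A toy illustration of analogy arithmetic in word embeddings, and of
# "neutralising" a bias direction by projection. The 3-d vectors below
# are hand-made stand-ins for real trained embeddings.
import numpy as np

emb = {
    "man":        np.array([ 1.0, 0.2, 0.1]),
    "woman":      np.array([-1.0, 0.2, 0.1]),
    "king":       np.array([ 1.0, 0.9, 0.3]),
    "queen":      np.array([-1.0, 0.9, 0.3]),
    "programmer": np.array([ 0.6, 0.1, 0.9]),  # leans "male" in dimension 0
    "homemaker":  np.array([-0.6, 0.1, 0.8]),  # leans "female" in dimension 0
}

def nearest(vec, exclude):
    # Word whose embedding has the highest cosine similarity to vec,
    # excluding the words used to form the query.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

# "man is to king as woman is to x"  ->  x ~ king - man + woman
print(nearest(emb["king"] - emb["man"] + emb["woman"],
              {"man", "woman", "king"}))            # queen

# The same arithmetic reproduces the stereotype:
print(nearest(emb["programmer"] - emb["man"] + emb["woman"],
              {"man", "woman", "programmer"}))      # homemaker

# Neutralise: zero out each occupation's component along the gender
# direction, so it sits at equal distance from "man" and "woman".
g = emb["man"] - emb["woman"]
g = g / np.linalg.norm(g)
for w in ("programmer", "homemaker"):
    emb[w] = emb[w] - (emb[w] @ g) * g

print(np.linalg.norm(emb["programmer"] - emb["man"]),
      np.linalg.norm(emb["programmer"] - emb["woman"]))  # now equal
```

In real systems the same arithmetic runs over hundreds of dimensions and a vocabulary of millions, but the mechanism producing the biased analogy, and the projection that removes it, is the same.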
Considering bias in AI is crucial, because without the correct decision-making processes, AI can discriminate unfairly on the basis of gender, ethnicity, or membership of a minority group.
Artificial Intelligence has the potential to help us make fairer decisions – but only if we take care to build fairness into the AI systems we create.
The growing use of AI in healthcare has provoked a debate about bias and fairness. However, human decision-making in this domain can also be flawed, shaped by individual and societal biases that are often unconscious. The key question is: will an AI system's decisions be less biased than human decisions, or will the use of AI make dealing with bias even more difficult? There are two opportunities here: using AI to identify and reduce the effect of human biases, and improving the fairness of the AI systems themselves.
Realising these opportunities will require cross-discipline collaboration to further develop technical improvements, operational practices, and ethical standards.
With human judgement still needed to ensure AI-supported decision-making is fair, we need to consider questions such as: when is an AI system fair enough to deploy, and in which situations should fully automated decision-making be permissible at all?
These questions can't be resolved with an optimisation algorithm, and machines cannot be left to determine the right answers. Human judgement that draws on benchmarks set within the fields of social sciences, law and ethics is needed to develop the necessary standards.
Some recent processes and methods that aim for increased fairness include pre-processing training data to remove or mask sensitive attributes, imposing fairness constraints during model training, and post-processing a model's outputs to equalise outcomes across groups.
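As a minimal sketch of one such pre-processing method, reweighing (in the spirit of Kamiran and Calders), the snippet below computes sample weights under which group membership and the outcome become statistically independent; the arrays are synthetic placeholders rather than real patient data:

```python
# A minimal sketch of "reweighing", a pre-processing fairness method:
# each (group, label) combination is weighted so that, under the new
# weights, group membership and the label become statistically
# independent. The arrays below are synthetic placeholders.
import numpy as np

group = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])   # sensitive attribute
label = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0])   # favourable outcome = 1

weights = np.empty(len(label))
for s in (0, 1):
    for y in (0, 1):
        mask = (group == s) & (label == y)
        # weight = expected frequency if independent / observed frequency
        expected = (group == s).mean() * (label == y).mean()
        weights[mask] = expected / mask.mean()

print(weights.round(2))
```

Passing these weights to a classifier's sample_weight parameter means the model trains on data in which the sensitive attribute no longer predicts the outcome.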
On an optimistic note, progress in managing bias in AI may also serve to raise the benchmark for determining whether human decisions are fair or biased; better data, analytics, and AI could become a powerful new tool for examining human biases.
So how can we ensure bias is reduced and managed on an ongoing basis, so that the AI systems we develop for medical devices avoid the toxic effect of reinforcing unhealthy stereotypes?
Use technical solutions
Ensure transparency and implement auditing processes (a simple sketch follows this list)
Strive for a diverse workforce
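As a simple illustration of the auditing point above, reporting performance and selection rates per group, rather than a single aggregate figure, makes disparities visible before deployment. The arrays below are illustrative placeholders; in practice the same report would run against a model's real validation data:

```python
# A minimal auditing sketch: report accuracy and selection rate per
# group instead of one aggregate figure, so disparities are visible
# before deployment. All arrays are illustrative placeholders.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in np.unique(group):
    m = group == g
    accuracy = (y_true[m] == y_pred[m]).mean()
    selection_rate = y_pred[m].mean()  # share receiving the positive outcome
    print(f"group {g}: accuracy={accuracy:.2f}, "
          f"selection rate={selection_rate:.2f}")

# A large gap in selection rates between groups (the demographic parity
# difference) is a signal to investigate the data and the model.
```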
To enable us to make effective decisions regarding AI systems in our medical devices, it is important to have a realistic view of AI. I recently came across the “Goldilocks rule for AI”: not being too optimistic, not being too pessimistic, but maintaining just the right outlook.
We shouldn't be too optimistic with our expectations of AI. The idea of sentient, super-intelligent AI robots serves only to encourage unnecessary fears of AI and distracts from the real issues.
But we also want to avoid being overly pessimistic. The notion that, because AI cannot do everything, another “AI winter” must be on its way is misinformed. The idea that AI will “take our jobs” is equally inaccurate. The McKinsey Global Institute recently estimated that whilst 400-800 million jobs worldwide could be displaced by 2030, 555-890 million jobs could be created in that same period as a result of AI.
So we need to strike a balance: AI can't do everything, but it will transform the MedTech industry and society as a whole for the better – if we address its limitations effectively.
I addressed above how AI has performance limitations relating to biased input data. Additionally, it's important to mention explainability and the risk of adversarial attacks.
Explainability in AI presents a challenge. Much of AI can tell you something, but it cannot always explain how it knows. In fairness – how does a human explain how a pen is a pen? We just know. Techniques have, however, been developed to address this challenge. Furthermore, addressing the difficulty of explaining how a neural network reached a particular decision, and which features in the data led to the result, can also play a role in mitigating bias: explainability techniques could help identify whether decision-making factors reflect bias, and could facilitate more accountability than in human decision-making (which typically isn't subject to such thorough probing).
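One example of such a technique is permutation feature importance, a model-agnostic method that measures how much performance drops when a feature's values are shuffled. Here is a minimal sketch, using a generic scikit-learn classifier on synthetic data rather than any specific medical device model:

```python
# A minimal sketch of permutation feature importance, a simple
# model-agnostic explainability technique. The model and data are
# illustrative placeholders built with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    # Shuffle one feature column to break its link with the target;
    # the drop in accuracy shows how much the model relied on it.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    print(f"feature {j}: importance ~ "
          f"{baseline - model.score(X_perm, y_test):.3f}")
```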
The risk of adversarial attacks on AI is very real. Whilst defences do exist, they incur cost, and, similar to “spam vs. anti-spam”, we may find ourselves in an arms race for some applications. As a result, cybersecurity measures are a crucial factor in the development of AI in medical devices. Take a look at the FDA's recent resources on cybersecurity related to SaMD.
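To make the threat concrete, here is a minimal sketch of one classic attack, the fast gradient sign method (FGSM), applied to a toy hand-rolled logistic regression; the weights and input are made up purely for illustration:

```python
# A minimal sketch of the fast gradient sign method (FGSM), a classic
# adversarial attack, against a hand-rolled logistic regression.
# Weights and input are toy values chosen for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Trained" model: p(y=1 | x) = sigmoid(w.x + b)
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.2, -0.4, 0.7])   # correctly classified input, true label 1
y = 1.0

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: perturb each feature by epsilon in the direction that increases
# the loss most, i.e. along the sign of the gradient.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"original prediction:    {sigmoid(w @ x + b):.3f}")      # ~0.83
print(f"adversarial prediction: {sigmoid(w @ x_adv + b):.3f}")  # ~0.39, label flips
```

In high-dimensional inputs such as medical images, far smaller per-feature perturbations suffice to flip a decision, which is exactly why such attacks matter for SaMD.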
Implemented responsibly – addressing ethical issues, setting expectations, and managing limitations – AI will continue to fill knowledge gaps in the MedTech industry. Here are some examples:
New uses of AI & machine learning
Developers are working to apply machine learning and AI to monitor and identify epidemics, to develop virtual nursing technology, and to provide image analysis and interpretation capabilities to diagnostic solutions. These complement existing research into using AI to model and predict biological and chemical interactions in drug development, and other solutions delivering insights based on big data.
AI and robotics
The global medical robotics market is forecast to grow to $20 billion by 2023. 5G connectivity, enabling fast and near-immediate data exchange, will present new opportunities for using robots in the future digital health economy. Robots will be applied for various purposes including transportation, prescription dispensing, disinfection, communication/telepresence and even as surgical assistants. Advancements in computer and machine vision will increasingly enable robots to sense and interpret visual input, adding value to their use in diagnostics, surgery and more.
AI and Digital twins
Using a digital twin, a doctor can model and safely determine the possible success of a medical procedure or treatment, enabling them to make more personalised (and more effective) recommendations in therapy. AI has a strong use case in this area.
Natural Language Processing and voice control
NLP provides great use cases in predictive analytics and diagnosis, while voice activation and control could have a major impact on senior care and supporting patients with disabilities.
Artificial Intelligence will allow the MedTech industry to advance at a faster rate, facilitating solutions that will significantly improve the lives of patients. But to sustain this development and growth, those who build AI systems for medical devices need to remain aware of the evolving societal context in which those devices are developed. When it comes to AI, you get out of it what you put in. Creating unbiased AI systems that truly enhance medical device development will require not just skilled technical expertise, but also a combination of analytical, objective and reflective thinking, and a deep understanding of the relevant ethical considerations.
If you have an AI in MedTech challenge, our eHealth team can help. Contact us by emailing paul.gardner@congenius.ch, by filling out our contact form, or by giving us a call on +41 44 741 04 04.