Ruminations

Blog dedicated primarily to randomly selected news items; comments reflecting personal perceptions

Saturday, May 04, 2019

Caveat Emptor

"The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information."
Samuel Finlayson, lead author, new paper on AI medical diagnoses

"Researchers have demonstrated the existence of adversarial examples for essentially every type of machine-learning model ever studied and across a wide range of data types, including images, audio, text, and other inputs."
"Adversarial attacks constitute one of many possible failure modes for medical machine-learning systems, all of which represent essential considerations for the developers and users of models alike."
"From the perspective of policy, however, adversarial attacks represent an intriguing new challenge, because they afford users of an algorithm the ability to influence its behaviour in subtle, impactful, and sometimes ethically ambiguous ways."
Research paper on A.I. systems "adversarial attacks"
In medical AI as in other applications, tiny changes to input can result in massive changes to output.
my life/Getty Images

One potential disadvantage of using Artificial Intelligence to improve the efficiency and accuracy of medical diagnosis is a business model that puts profit over ethics, behaviour that insurers and health care agencies might adopt to benefit their bottom line. Researchers from Harvard University and the Massachusetts Institute of Technology have studied the situation and concluded that AI diagnostic devices are vulnerable to manipulation.

Such manipulation would distort the findings of AI devices, for example when determining whether a patient's lesion is malignant or benign, for ulterior purposes that benefit a business entity while diminishing health outcomes for the individual. Technology has proceeded at break-neck speed to develop systems capable of identifying disease symptoms in a wide variety of images, expeditiously and economically.

The medical community looks upon these systems as aids to efficiency and accuracy, relieving physicians of burdensome tasks and speeding up processes. As artificial intelligence spreads through the computer systems used by health care professionals around the world, the potential for interference may seem enticing to the unscrupulous. It is these unintended consequences of the use of AI in medical devices that compelled this research.

A medical scanner
Some medical AIs are easily tripped up
Science Photo Library/Getty
The researchers warn of the prospect of "adversarial attacks": manipulations that alter an A.I. system's behaviour by, for example, changing a few pixels on a scan, so that the system is led to 'see' an illness that does not exist, or to fail to identify one that is there in reality. The caution is meant to alert developers and regulators to recognize such potential scenarios when building and evaluating A.I. technologies.

Artificial Intelligence increasingly relies on complex mathematical systems that learn tasks on their own by analyzing vast amounts of data; "machine learning" on this enormous scale can produce unexpected behaviour all its own. Adversarial attacks could just as well fool self-driving vehicles: small changes to street signs can dupe a car into detecting a yield sign rather than a stop sign.
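The few-pixels idea can be illustrated with a toy sketch. This is purely illustrative, assuming a made-up linear "classifier" rather than the deep networks real medical systems use: a perturbation far too small to notice per pixel, pointed along the model's gradient, is enough to flip a "benign" verdict to "malignant".

```python
import numpy as np

# Toy linear "diagnostic" classifier: score = w @ x + b, positive => "malignant".
# This is a hypothetical stand-in for a real model, kept linear so the
# attack can be written in two lines.
rng = np.random.default_rng(0)
n_pixels = 256
w = rng.normal(size=n_pixels)   # classifier weights, one per "pixel"
b = 0.0

x = rng.normal(size=n_pixels)   # a scan to classify
if w @ x + b > 0:
    x = -x                      # flip so the clean input starts out "benign"
clean_score = w @ x + b         # negative => "benign"

# FGSM-style attack: nudge every pixel by epsilon in the direction that
# increases the score. For a linear model the gradient w.r.t. x is just w,
# so the worst-case step per pixel is epsilon * sign(w).
epsilon = 1.01 * abs(clean_score) / np.abs(w).sum()  # just past the boundary
x_adv = x + epsilon * np.sign(w)
adv_score = w @ x_adv + b       # now positive => "malignant"

print(f"per-pixel change: {epsilon:.4f}")
print(f"clean score: {clean_score:.2f}  adversarial score: {adv_score:.2f}")
```

The per-pixel change needed is tiny relative to the pixel values themselves, which is why such alterations can be invisible to a human reviewing the same scan.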

A team at New York University's Tandon School of Engineering last year created virtual fingerprints that evaded fingerprint readers 22 percent of the time, meaning such fingerprints could potentially unlock 22 percent of the phones and PCs that use those readers.

India has implemented the world's largest fingerprint-based identity system to facilitate distribution of government stipends and services, a boost to the efficiency of a bureaucracy responsible for an immense population. Banks are similarly introducing face-recognition access to A.T.M.s, while companies are testing self-driving cars on public roads.

If regulators use A.I. systems to evaluate new technology, cautions Samuel Finlayson, device producers could conceivably alter data to fool those systems into granting regulatory approval. Once A.I. is deeply rooted in the North American health care systems, the researchers argue, businesses could adopt whatever behaviour nets them the most profit in the long run; business, after all, has a tendency to manoeuvre in that direction. It's called 'free enterprise'.


Robots are increasingly utilized for medical applications.
Researchers at Harvard University and the Massachusetts Institute of Technology warn new artificial intelligence technology designed to enhance healthcare is vulnerable to misuse.
Credit: Reuters

