Will your “smart” devices and AI-powered apps have a legal duty to report on you?

I just came across an interesting article, “Should Marketers of AI Applications for Psychotherapy Have a Tarasoff Duty?”, which answers the question in its title “yes”: Just as human psychotherapists in most states have a legal duty to warn a patient’s potential victims if the patient says something suggesting a plan to injure them (this is the Tarasoff duty, named after a 1976 California Supreme Court case), so AI programs used by a patient should do the same.

This is a plausible argument – given that the duty is recognized as a matter of state law, courts might plausibly interpret it as applying to AI psychotherapists as well as to human ones – but it seems to me to highlight a broader question:

To what extent will various “smart” products, whether apps or cars or Alexas or various Internet of Things devices, be required to monitor and report potentially dangerous behavior by their users (or even by their apparent “owners”)?

To be sure, the Tarasoff duty is somewhat unusual in being a duty that is triggered even in the absence of any affirmative contribution by the defendant to the harm. Normally, a psychotherapist would have no duty to prevent harm caused by his patient, just as you have no duty to prevent harm caused by your friends or adult family members; Tarasoff was a significant step beyond traditional tort law rules, though many states have indeed adopted it. I am actually skeptical of Tarasoff, though most judges who have considered the issue don’t share my skepticism.

But it is well established in tort law that people have a legal duty to take reasonable care when doing something that can affirmatively help someone else do something harmful (this is the basis for legal claims such as negligent entrustment, negligent hiring, and the like). For instance, a carmaker’s providing a car to a driver affirmatively contributes to the harm caused when the driver drives carelessly.

Does this mean that modern (non-self-driving) cars must – simply as a matter of general tort law – report to the police, for instance, when a driver appears to be driving erratically in ways that suggest probable drunkenness? Should Alexa or Google report requests for information that appear aimed at finding ways to hurt someone?

Of course, such a duty might be rejected for reasons of confidentiality or, in particular, of a right not to have products that you have bought or are using monitor and report on you. But if so, then work may need to be done, whether by legislatures or by courts, to keep existing tort law principles from pressuring manufacturers into such monitoring and reporting.

I’ve been thinking about this ever since my Tort Law vs. Privacy article, but it seems to me that the recent surge in smart devices will make these questions even more pressing.
