AI "robot" lawyer will represent defendant in a US courtroom in February.

army judge

The first artificial intelligence-powered lawyer is going to court next month.

The CEO of legal services company DoNotPay tweeted about it in January. Joshua Browder wrote "On February 22nd at 1.30PM, history will be made. For the first time ever, a robot will represent someone in a US courtroom. DoNotPay A.I will whisper in someone's ear exactly what to say. We will release the results and share more after it happens. Wish us luck!"

Experts say there are pros and cons to this idea.

"I can see some benefits of having tools that can assist people as they go to court for traffic tickets and other violations because, unfortunately, access to justice is a real problem in our society," said Lee Tiedrich, a visiting professor of the practice of law at Duke University School of Law.

On the other hand, she said there are risks. Bots don't have a legal education, and unlike a lawyer they are not subject to ethical guidelines.

"The machine is not unbiased. It learned from somewhere, so the output kind of depends on a lot of the data that it found," said Kerstin Hering, the director of the Humane Robot Technology Lab at the University of Denver.

AI has been in the news a lot lately, from AI-generated art to programs like ChatGPT, which has passed an MBA exam.

"It was not accessible to a broader population and now it is and it makes sense. It is very good at what it does," Hering said about the growing use of artificial intelligence, which has been around since the 1950s in some capacity. "The bottom line is 'What are humans doing with this?'"

Experts say new tech can also be used in the wrong ways. As it evolves, regulations will have to evolve, too.

"I'm pretty confident we're going to start seeing more AI laws and regulations within the next year," said Tiedrich. "It's forcing us as a society to figure out how can we create structures, legal policy structures, business norms, that enable us to capitalize on the benefits and mitigate the risks."


https://www.kxxv.com/news/national/ai-powered-lawyer-to-defend-in-court-next-month


 
How is this legal?

I'm guessing, but I suspect if enough palms get greased, almost anything is allowed.

It also might have to do with the application of logic that APPEARS to correct a wrong.

An example is provided in the article: "making JUSTICE accessible to ALL, not just the financially blessed, while forgetting the financially distressed."

I don't happen to agree with the logic, but I understand the approach.
 
It's probably illegal in most places, but this is for a traffic ticket, and I'm guessing permission was obtained from the court, similar to getting permission for a trial by Zoom or phone.

Also possible that the defendant agreed to abide by the decision even if it went against him, or the court agreed to waive the decision and allow a retrial if things didn't work out.

Could also be a Judge Judy type of trial. The where and the who are not specified.
 
How is this legal?

It almost certainly isn't. Even though the AI vendor is setting up the tech to do it, the judge may well rule in favor of any objection the opposing party has to it. Moreover, the judge could, on his or her own, eject the chatbot, either because the person whispering the answers is not licensed to practice law or because the process slows down the court proceedings.

Furthermore, I don't think the CEO of the AI company understands the complexities of a trial and the difficulty an AI program will have responding in real time to a fluid situation as the trial unfolds. For example, I can already foresee the AI having trouble raising proper objections and offering good responses to objections by the other side. And I can see problems in examining witnesses, too, whether on direct exam or cross exam. It's not enough to simply have a large knowledge base the AI can quickly search to answer factual questions. Facts are only part of what a lawyer deals with in presenting a case.
 
It all sounds plausible.

I hope it works, as long as it helps those in need.

For a very simple hearing before a judge it might work out. For anything more complex than that, I foresee real problems. AI technology is not yet at the point where it can replace humans for things like this. And it may never get to that point, because a lot depends on getting a read of what your judge and jury are like, and chatbots aren't going to be able to do a good job sorting that out. Human emotions play a significant part in a lot of trials, and computers don't understand emotions the way we do, since they don't experience them.
 
For a very simple hearing before a judge it might work out. For anything more complex than that, I foresee real problems. AI technology is not yet at the point where it can replace humans for things like this. And it may never get to that point, because a lot depends on getting a read of what your judge and jury are like, and chatbots aren't going to be able to do a good job sorting that out. Human emotions play a significant part in a lot of trials, and computers don't understand emotions the way we do, since they don't experience them.

You illustrate a few of the potential problems for the BOT.

It appears plausible, but plausibility isn't necessarily reality.

I'll follow the upcoming trial to see how useful the application actually is.

As I see it now, what the developer has appears to be the prototype.

Prototypes aren't always successful, even with additional fiddling and fixing.
 
As I see it now, what the developer has appears to be the prototype.

All of these generalized chatbot programs that claim to be able to answer anything are still in the development stages. And while some of the answers are pretty good, considering everything a computer has to do to find relevant material and then write a coherent response without itself violating copyright law, significant flaws have so far been found with each of them. They have a way to go before realizing what the developers see as the program's potential. One noted problem is that chatbots tend to adopt the biases of their developers.
 