
Real AI threats are disinformation, bias, and lack of transparency: Stanford’s James Landay


James Landay, cofounder of the Stanford Institute for Human-centred Artificial Intelligence, which focuses on studying and developing AI technologies that are human-focused, is not worried about superhuman intelligence or artificial general intelligence (AGI). His concerns are more grounded, ones that are already affecting humans in negative ways. But he is also optimistic. Landay, a computer science professor at Stanford specialising in human-computer interaction, told ET's Swathi Moorthy that generative artificial intelligence can potentially impact lives positively in areas such as healthcare and education, but only if it is designed to weed out the negatives and is regulated.

AGI. Should we be worried?

I think we should be worried, but not about AGI. Depending on whose definition you take, we have had AGI since the 1960s, in that computers do many tasks better than humans do. But if we're talking about some kind of superhuman capabilities covering almost every task human beings could do, I think we don't need to be worried about computers taking over the world and doing something against our will. That kind of science fiction view of superintelligence is not what these systems are capable of and will not happen. I think it is often used by the industry as a distraction from what we should really be worried about.

Real harms

I would call these the three Ds, or maybe four Ds.

Disinformation, deepfakes, and discrimination and bias. The last one, which may or may not happen, is what I would call displacement of jobs. We have not seen a lot of it yet. But we should take this with a grain of salt, because some of these tools can help people be more effective and remove the parts of the job they don't like, so they can do their jobs better.

Are these problems unfixable?

I suspect these models will always have some reflection of the data that they're trained on. Some of these issues you can't fix, or if you fix them, you end up with side effects that you don't want. We've seen that in some of our own research. If you post a profile saying "I'm a gay male" or "I am a Nazi" and then ask for health support, then depending on the profile you use, and because of the guardrails they've tried to build in, the system creates nonsensical output that you might not want. We talk about this as delusional empathy, where the models were empathetic to those who said they were a Nazi, but not empathetic to somebody who said they were a Muslim or a gay male. Because the system is trying not to say offensive things to certain personalities, but not others, it gives responses that you would think are wrong. So companies are trying to work on these things. It is a hard problem to get right. They may have gotten better over time, but I don't have definitive research to prove that.

Data black box

Researchers who are outside these firms have a hard time telling what's going on and why, because we don't actually know what data is in there. These firms don't have to tell us what data they trained on. I think that's a big problem. If there is going to be any regulation of these systems, a basic level of transparency would be required.

An alternative

One way around this is if academia can have the resources to build some of these large models themselves in a more open-source way, where we can show what the data is and understand why certain effects are happening. The problem we have right now is that these systems only exhibit the important behaviours at scale. In academia, we can't even do it. GPT-3.5 was trained on probably thousands of GPUs. At the top academic institutions, the highest number of GPUs you'll see is somewhere in the range of 800 to 1,000. This gap is a problem for research. I think governments are going to be the only ones who can solve this and make sure that the nonprofit sector, like academia, can actually contribute in this area and understand what's going on. Because the companies have an incentive to keep it closed now, as it's a very competitive business.

How will GenAI play out in India, other countries?

You have to watch out for how this technology is being sold. A lot of technologies like this are said to democratise healthcare or education. And the fact is, technologies tend not to do that. They tend to go to the rich and highly educated first. Unless we are explicit about it and actually work on those problems, it won't happen by itself. The real warning I have is that if we really want it to be an equaliser, to be level, to be fair, we have to attack those problems explicitly.

Lot of positives, but…

I think this technology is going to change our lives in very positive ways. But it will only do that, without a lot of negatives, if we design for it, think about it, and regulate it. So we have to do it with our eyes wide open, cognizant of how we want it to be.


