What would be an argument to restrict AI instead of interacting with it (or he/she)? [closed]

Replies

User

( 4 months ago )

I am planning to build a world in which one portion of the populace consists of two-legged animals and the remainder is human. They have just reached the point where an AI is as smart as they are (AGI).

The two-legged animals face discrimination (as in Skyrim), which may affect how they treat AIs. There is also a dispute over how AIs should be treated.

I have found strong arguments for letting AIs interact freely (due to personal bias), but not for the opposing side, which wants to restrict AIs.

There is a resource shortage on the planet they live on, which may introduce new points of view on what to do with AI.

What would be arguments to restrict AI, to let it interact, or other positions on what to do with AI? What arguments might be proposed by the wealthy, the poor, humans, the animals that are discriminated against, the AI itself, other groups, or any combination of the above? It is important to see the views of a variety of groups.

User

( 4 months ago )

Because you don't know what the AI has learned.

An artificial general intelligence at human level is actually far more capable than a human. The difference lies in the speed at which it thinks. Current computers, which are "dumber" than humans, already far exceed us at certain tasks, such as raw computation. A human-level AGI would be as capable as hundreds if not thousands of perfectly synchronized humans of that intelligence.

To create such an intelligence you do not code every single bit; you write learning programs and give the AI tasks that teach it something. The problem is that you aren't sure what it really learns, or how new information might change what it has learned. Your AI might be incapable of killing any human, but after being tasked with disaster relief, where it learned to choose which groups would be offered aid and which would be condemned to high death rates, it could apply that lesson elsewhere... and you'll never see it coming. It could decide to kick a few hundred thousand people out of their homes and relocate them somewhere a large portion will likely die, just to build something where they lived.

This is one of the reasons why I think AGIs should all be put in a box, with a few more boxes next to it. You feed the problem to the boxes, compare the answers, take the most common one, and analyse the hell out of it. Then either execute the answer manually or through a cruder AI that you can trust.
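The box protocol above is basically majority voting over isolated oracles. A minimal sketch in Python, assuming each boxed AGI can be treated as a black-box function of the problem (the `boxed_consensus` name, the `min_agreement` threshold, and the toy oracles are all hypothetical, purely for illustration):

```python
from collections import Counter

def boxed_consensus(problem, oracles, min_agreement=0.5):
    """Put the same problem to several isolated 'boxes', keep the
    most common answer, and accept it only if enough boxes agree.
    The accepted answer is still meant to be analysed by humans
    (or a cruder, trusted AI) before anything is executed."""
    answers = [oracle(problem) for oracle in oracles]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes / len(oracles) < min_agreement:
        raise ValueError("no sufficiently common answer; escalate to review")
    return answer

# Toy stand-ins for boxed AGIs: two agree, one dissents.
boxes = [lambda p: p * 2, lambda p: p * 2, lambda p: p + 1]
print(boxed_consensus(21, boxes))  # 42
```

The point of the threshold is that a split vote is itself a signal: if the boxes disagree too much, no answer is trusted and the question goes back to the humans.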
